Adaptive gain control for digital audio samples in a media stream

Information

  • Patent Grant
  • Patent Number
    9,491,538
  • Date Filed
    Thursday, March 21, 2013
  • Date Issued
    Tuesday, November 8, 2016
Abstract
An adaptive gain control system and related operating method for digital audio samples is provided. The method is suitable for use with a digital media encoding system that transmits encoded media streams to a remotely-located presentation device such as a media player. The method begins by initializing the processing of a media stream. Then, the method adjusts the gain of a first set of digital audio samples in the media stream using a fast gain adaptation scheme, resulting in a first group of gain-adjusted digital audio samples having normalized volume during presentation. The method continues by adjusting the gain of a second set of digital audio samples in the media stream using a steady state gain adaptation scheme that is different than the fast gain adaptation scheme, resulting in a second group of gain-adjusted digital audio samples having normalized volume during presentation.
Description
TECHNICAL FIELD

Embodiments of the subject matter described herein relate generally to the processing of digital audio samples in a media stream. More particularly, embodiments of the subject matter relate to digitally adjusting gain of digital audio samples in a media stream such that the perceived volume is normalized during presentation.


BACKGROUND

Recently, consumers have expressed significant interest in “place-shifting” devices that allow viewing of television or other media content at locations other than their primary media presentation device. Place-shifting devices typically packetize media content that can be transmitted over a local or wide area network to a portable computer, mobile phone, personal digital assistant, remote television or other remote device capable of playing back the packetized media stream for the viewer. Place-shifting therefore allows consumers to view their media content from remote locations such as other rooms, hotels, offices, and/or any other locations where portable media player devices can gain access to a wireless or other communications network.


While place-shifting does greatly improve the convenience afforded to the end user, there remain some challenges related to the manner in which different media streams are presented at the end device. For instance, the digital audio samples in one media stream may be associated with a baseline or average presentation loudness or volume, while the digital audio samples in another media stream may be associated with a different baseline/average presentation loudness or volume. Thus, if the user switches between different media streams, the perceived loudness may be inconsistent, and the user will therefore need to adjust the volume control on the presentation device.


Volume normalization techniques can be utilized to automatically adjust the volume perceived by the user. Some volume normalization techniques operate in the analog domain, and others operate in the digital domain. Digital volume normalization techniques are best suited for place-shifting applications because the media streams are encoded and transmitted to the presentation device using data packets. Unfortunately, existing digital volume normalization techniques tend to be ineffective and/or they introduce audible artifacts that can be distracting to the user.


BRIEF SUMMARY

An adaptive gain control method for digital audio samples is provided. The method begins by initializing processing of a media stream. The method continues by adjusting gain of a first set of digital audio samples in the media stream using a fast gain adaptation scheme, resulting in a first group of gain-adjusted digital audio samples having normalized volume during presentation. Thereafter, the method adjusts gain of a second set of digital audio samples in the media stream using a steady state gain adaptation scheme that is different than the fast gain adaptation scheme, resulting in a second group of gain-adjusted digital audio samples having normalized volume during presentation.


Also provided is a computer program product, which is tangibly embodied in a computer-readable medium. The computer program product is operable to cause a digital media processing device to perform operations for a media stream. These operations include: calculating a loudness estimate for a current block of digital audio samples in the media stream; calculating a reference gain value for the current block of digital audio samples, the reference gain value being influenced by the loudness estimate; calculating a maximum gain value for the current block of digital audio samples; calculating an estimated gain value for the current block of digital audio samples, the estimated gain value being influenced by the reference gain value and the maximum gain value; and calculating a gain value for the current block of digital audio samples, the gain value being influenced by the estimated gain value, the maximum gain value, and a previous gain value for a previous block of digital audio samples in the media stream. The computer program product is also operable to cause the digital media processing device to modify the current block of digital audio samples by applying the gain value to the digital audio samples in the current block of digital audio samples. In certain embodiments the maximum gain value is influenced by dynamic range of the current block of digital audio samples.


A system for processing digital audio samples in a media stream is also provided. The system includes a first means for adjusting gain of a first block of digital audio samples in the media stream using a fast gain adaptation scheme, resulting in a first block of gain-adjusted digital audio samples. The system also includes a second means for adjusting gain of a second block of digital audio samples in the media stream using a steady state gain adaptation scheme that is different than the fast gain adaptation scheme, resulting in a second block of gain-adjusted digital audio samples. The system also includes means for transmitting gain-adjusted digital audio samples to a remotely-located media player.


This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.



FIG. 1 is a schematic representation of an embodiment of a media presentation system;



FIG. 2 is a schematic representation of an embodiment of a digital media processing device;



FIG. 3 is a flow chart that illustrates an embodiment of an adaptive digital gain control process; and



FIG. 4 is a flow chart that illustrates an embodiment of a digital gain calculation process.





DETAILED DESCRIPTION

The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.


Techniques and technologies may be described herein in terms of functional and/or logical block components, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. In practice, one or more processor devices can carry out the described operations, tasks, and functions by manipulating electrical signals representing data bits at memory locations in the system memory, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to the data bits. It should be appreciated that the various block components shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.


When implemented in software or firmware, various elements of the systems described herein are essentially the code segments or instructions that perform the various tasks. The program or code segments can be stored in a processor-readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication path. The “processor-readable medium” or “machine-readable medium” may include any medium that can store or transfer information. Examples of the processor-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, or the like. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic paths, or RF links. The code segments may be downloaded via computer networks such as the Internet, an intranet, a LAN, or the like.


According to various embodiments, the perceived presentation loudness (i.e., volume) of a media stream is normalized or leveled relative to a reference loudness, such that different media streams are presented at about the same average loudness for a constant volume setting at the presentation device. The volume normalization scheme is carried out in the digital domain by modifying, adjusting, or otherwise altering the digital audio samples associated with the media streams. In certain embodiments, the digital audio samples are modified by a digital media processing device that encodes and transmits media streams (via a data communication network) to the user's media presentation device (e.g., a laptop computer, a cell phone, a remote set-top box, or the like). The digitally normalized audio samples are transmitted to the presentation device in the desired media stream, resulting in normalized presentation volume for different media streams. Notably, the presentation device itself need not be modified to support the digital volume normalization techniques described here because the digital audio samples arrive at the presentation device after application of digital gain adjustment.


Turning now to the figures and with initial reference to FIG. 1, an exemplary embodiment of a media presentation system 100 can be utilized to carry out place-shifting of digital media content that includes digital audio samples. This particular embodiment of the system 100 includes a digital media processing device (e.g., a place-shifting encoder system 102) that receives media content 122 from a content source 106, encodes the received content into a streaming format, and then transmits the encoded media stream 120 to a remotely-located digital media player (or other presentation device) 104 over a network 110. The media player 104 receives the encoded media stream 120, decodes the stream, and presents the decoded content to a viewer on a television or other display 108. Although not depicted in FIG. 1, the media player 104 includes or cooperates with at least one speaker, audio transducer, or other sound-generating element that supports the presentation of the audio portion of media streams. In various embodiments, a server 112 may also be provided to communicate with the encoder system 102 and/or the media player 104 via the network 110 to assist these devices in locating each other, maintaining security, providing or receiving content or information, and/or any other features as desired. This feature is not required in all embodiments, however, and the concepts described herein may be deployed in any data streaming application or environment, including place-shifting as well as any other media or data streaming situation.


The encoder system 102 is any component, hardware, software logic and/or the like capable of transmitting a packetized stream of media content over the network 110. In various embodiments, the encoder system 102 incorporates suitable encoder and/or transcoder (collectively “encoder”) logic to convert audio/video or other media content 122 into a packetized format that can be transmitted over the network 110. The media content 122 may be received in any format, and may be received from any internal or external content source 106 such as any sort of broadcast, cable or satellite television programming source, a “video-on-demand” or similar source, a digital video disk (DVD) or other removable media, a video camera, and/or the like. The encoder system 102 encodes the media content 122 to create the encoded media stream 120 in any manner. In various embodiments, the encoder system 102 contains a transmit buffer 105 that temporarily stores encoded data prior to transmission on the network 110.


In practice, an embodiment of the encoder system 102 may be implemented using any of the various SLINGBOX products available from Sling Media of Foster City, Calif., although other products could be used in other embodiments. Certain embodiments of the encoder system 102 are generally capable of receiving the media content 122 from an external content source 106 such as any sort of digital video recorder (DVR), set top box (STB), cable or satellite programming source, DVD player, and/or the like. In such embodiments, the encoder system 102 may additionally provide commands 124 to the content source 106 to produce the desired media content 122. Such commands 124 may be provided over any sort of wired or wireless interface, such as an infrared or other wireless transmitter that emulates remote control commands receivable by the content source 106. Other embodiments, however, particularly those that do not involve place-shifting, may modify or omit this feature entirely.


In other embodiments, the encoder system 102 may be integrated with any sort of content-receiving or other capabilities typically affiliated with the content source 106. The encoder system 102 may be a hybrid STB or other receiver, for example, that also provides transcoding and place-shifting features. Such a device may receive satellite, cable, broadcast and/or other signals that encode television programming or other content received from an antenna, modem, server and/or other source. A receiver of the encoder system 102 may further demodulate or otherwise decode the received signals to extract programming that can be locally viewed and/or place-shifted to the remotely-located media player 104 as appropriate. In this regard, the encoder system 102 may also include a content database stored on a hard disk drive, memory, or other storage medium to support a personal or digital video recorder (DVR) feature or other content library as appropriate. Hence, in some embodiments, the content source 106 and the encoder system 102 may be physically and/or logically contained within a common component, housing or chassis.


In still other embodiments, the encoder system 102 includes or is implemented as a software program, applet, or the like executing on a conventional computing system (e.g., a personal computer). In such embodiments, the encoder system 102 may encode, for example, some or all of a screen display typically provided to a user of the computing system for place-shifting to a remote location. One device capable of providing such functionality is the SlingProjector product available from Sling Media of Foster City, Calif., which executes on a conventional personal computer, although other products could be used as well.


The media player 104 is any device, component, module, hardware, software and/or the like capable of receiving the encoded media stream 120 from one or more encoder systems 102. In various embodiments, the media player 104 is a personal computer (e.g., a “laptop” or similarly portable computer, although desktop-type computers could also be used), a mobile phone, a personal digital assistant, a personal media player, or the like. In many embodiments, the media player 104 is a general purpose computing device that includes a media player application in software or firmware that is capable of securely connecting to the encoder system 102, and is capable of receiving and presenting media content to the user of the device as appropriate. In other embodiments, however, the media player 104 is a standalone or other separate hardware device capable of receiving the encoded media stream 120 via any portion of the network 110 and decoding the encoded media stream 120 to provide an output signal 126 that is presented on the display 108. One example of a standalone media player 104 is the SLINGCATCHER product available from Sling Media of Foster City, Calif., although other products could be equivalently used.


The network 110 is any digital or other communications network capable of transmitting messages between senders (e.g., the encoder system 102) and receivers (e.g., the media player 104). In various embodiments, the network 110 includes any number of public or private data connections, links or networks supporting any number of communications protocols. The network 110 may include the Internet, for example, or any other network based upon TCP/IP or other conventional protocols. In various embodiments, the network 110 also incorporates a wireless and/or wired telephone network, such as a cellular communications network for communicating with mobile phones, personal digital assistants, and/or the like. The network 110 may also incorporate any sort of wireless or wired local area networks, such as one or more IEEE 802.3 and/or IEEE 802.11 networks.


The encoder system 102 and/or the media player 104 are therefore able to communicate in any manner with the network 110 (e.g., using any sort of data connections 128 and/or 125, respectively). Such communication may take place over a wide area link that includes the Internet and/or a telephone network, for example; in other embodiments, communications between the encoder system 102 and the media player 104 may take place over one or more wired or wireless local area links that are conceptually incorporated within the network 110. In various equivalent embodiments, the encoder system 102 and the media player 104 may be directly connected via any sort of cable (e.g., an Ethernet cable or the like) with little or no other network functionality provided.


Many different place-shifting scenarios could be formulated based upon available computing and communications resources, consumer demand and/or any other factors. In various embodiments, consumers may wish to place-shift content within a home, office or other structure, such as from the encoder system 102 to a desktop or portable computer located in another room. In such embodiments, the content stream will typically be provided over a wired or wireless local area network operating within the structure. In other embodiments, consumers may wish to place-shift content over a broadband or similar network connection from a primary location to a computer or other remote media player 104 located in a second home, office, hotel or other remote location. In still other embodiments, consumers may wish to place-shift content to a mobile phone, personal digital assistant, media player, video game player, automotive or other vehicle media player, and/or other device via a mobile link (e.g., a GSM/EDGE or CDMA/EVDO connection, any sort of 3G or subsequent telephone link, an IEEE 802.11 “Wi-Fi” link, and/or the like). Several examples of place-shifting applications available for various platforms are provided by Sling Media of Foster City, Calif., although the concepts described herein could be used in conjunction with products and services available from any source.



FIG. 2 is a schematic representation of an embodiment of a digital media processing device, such as the encoder system 102. Again, the encoder system 102 generally creates an encoded media stream 120 that is routable on the network 110 based upon the media content 122 received from the content source 106. In this regard, and with reference now to FIG. 2, the encoder system 102 typically includes an encoding module 202, a transmit buffer 105, and a network interface 206 in conjunction with appropriate control logic, which may be associated with a control module 205. In operation, the encoding module 202 typically receives the media content 122 from the internal or external content source 106, encodes the data into the desired format for the encoded media stream 120, and stores the encoded data in the transmit buffer 105. The network interface 206 then retrieves the formatted data from the transmit buffer 105 for transmission on the network 110. The control module 205 suitably monitors and controls the encoding and network transmit processes carried out by the encoding module 202 and the network interface 206, respectively, and may perform other functions as well. The encoder system 102 may also have a command module 208 or other feature capable of generating and providing the commands 124 to the content source 106, as described above.


As noted above, creating the encoded media stream 120 typically involves encoding and/or transcoding the media content 122 received from the content source 106 into a suitable digital format that can be transmitted on the network 110. Generally, the encoded media stream 120 is placed into a standard or other known format (e.g., the WINDOWS MEDIA format available from the Microsoft Corporation of Redmond, Wash., the QUICKTIME format, the REALPLAYER format, an MPEG format, and/or the like) that can be transmitted on the network 110. This encoding may take place, for example, in any sort of encoding module 202 as appropriate. The encoding module 202 may be any sort of hardware (e.g., a digital signal processor or other integrated circuit used for media encoding), software (e.g., software or firmware programming used for media encoding that executes at the encoder system 102), or the like. The encoding module 202 is therefore any feature that receives the media content 122 from content source 106 (e.g., via any sort of hardware and/or software interface) and encodes or transcodes the received data into the desired format for transmission on the network 110. Although FIG. 2 shows a single encoding module 202, in practice the encoder system 102 may include any number of encoding modules 202. Different encoding modules 202 may be selected based upon the type of media player 104, network conditions, user preferences, and/or the like.


In various embodiments, the encoding module 202 may also apply other modifications, transforms, and/or filters to the received content before or during the encoding/transcoding process. Video signals, for example, may be resized, cropped and/or skewed. Similarly, the color, hue and/or saturation of the signal may be altered, and/or noise reduction or other filtering may be applied. Digital rights management encoding and/or decoding may also be applied in some embodiments, and/or other features may be applied as desired. Audio signals may be modified by adjusting sampling rate, mono/stereo parameters, noise reduction, multi-channel sound parameters and/or the like. In this regard, digital audio samples in a media stream can be modified in accordance with the adaptive digital gain control techniques and methodologies described in more detail below. Such gain control techniques can be used to modify blocks of digital audio samples to normalize the volume or loudness perceived by the user during presentation of the media stream.


The network interface 206 refers to any hardware, software and/or firmware that allows the encoder system 102 to communicate on the network 110. In various embodiments, the network interface 206 includes suitable network stack programming and other features and/or conventional network interface card (NIC) hardware such as any wired or wireless interface as desired.


In various embodiments, the control module 205 monitors and controls the encoding and transmit processes performed by the encoding module 202 and the network interface 206, respectively. To that end, the control module 205 is any hardware, software, firmware or combination thereof capable of performing such features. In various embodiments, the control module 205 further processes commands received from the remote media player via the network interface 206 (e.g., by sending the commands 124 to the content source 106 via the command module 208 or the like). The control module 205 may also transmit commands to the media player 104 via the network interface 206 and/or may control or otherwise effect any other operations of the encoder system 102. In various embodiments, the control module 205 implements the control features used to monitor and adjust the operation of the encoding module 202 and/or the network interface 206 to efficiently provide the media stream to the media player 104.


Certain embodiments of the encoding module 202 may include or execute one or more computer programs (e.g., software) that are tangibly embodied in appropriately configured computer-readable media 210. The computer program product is operable to cause the encoder system 102 to perform certain operations on media streams, as described in more detail below with reference to FIG. 3 and FIG. 4. For this embodiment, FIG. 2 depicts the computer-readable media 210 associated with the control module 205, although such an association need not be employed in all implementations. Indeed, the computer-readable media 210 may alternatively (or additionally) be utilized in connection with the encoding module 202 if so desired. As mentioned above, the computer-readable media 210 may include, without limitation: an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, or the like.



FIG. 3 is a flow chart that illustrates an embodiment of an adaptive digital gain control process 300, and FIG. 4 is a flow chart that illustrates an embodiment of a digital gain calculation process 400. The various tasks performed in connection with an illustrated process may be performed by software, hardware, firmware, or any combination thereof. The operations and instructions associated with a described process may be executed by any processor and/or other processing features within the encoder system 102, and the particular means used to implement each of the various functions shown in the figures, then, could be any sort of processing hardware (such as the control module 205 of FIG. 2) executing software or processor-based logic in any format. It should be appreciated that a described process may include any number of additional or alternative tasks, or may omit one or more illustrated tasks. Moreover, the tasks shown in the figures need not be performed in the illustrated order, and a described process may be incorporated into a more comprehensive procedure or process having additional functionality not described in detail herein.


Referring now to FIG. 3, the adaptive digital gain control process 300 can be started when a new media stream has been designated for encoding and delivery to a presentation device. A media stream may include or otherwise be associated with a plurality of digital audio samples. As used here, a digital audio sample corresponds to a digital representation of an analog audio signal taken at a point in time (or over a relatively short period of time), as is well understood. The magnitude of the analog audio signal is converted into a digital representation, and the digital audio sample includes a number of bits that conveys that digital representation. In certain embodiments, one digital audio sample is represented by sixteen bits (although the number of bits in practice may be more or less than sixteen if so desired). In accordance with some embodiments, the sampling rate for digital audio samples is within the range of about 16,000 to 48,000 samples per second, and certain embodiments utilize a sampling rate of 32,000 samples per second.


As used here, a “block” of samples refers to a set, group, or other collection of samples contained in a media stream. The number of samples in a block can be arbitrarily defined, or the number may be selected for compatibility with certain data communication standards, streaming media standards, hardware requirements, and/or software requirements. In certain embodiments, one block of digital audio samples is represented by 1024 consecutive samples (although the number of samples per block may be more or less than 1024 if so desired). Consequently, for a sampling rate of 32,000 samples per second, one block of digital audio samples represents about 32 ms of time.
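
For illustration only, the block-duration figure quoted above follows directly from the sampling rate. A minimal Python sketch, assuming the 32,000 samples-per-second rate and 1024-sample blocks of this embodiment (the constant and function names are illustrative, not drawn from the patent):

    SAMPLE_RATE = 32000  # samples per second, per the embodiment above
    BLOCK_SIZE = 1024    # consecutive samples per block, per the embodiment above

    def block_duration_ms(block_size=BLOCK_SIZE, sample_rate=SAMPLE_RATE):
        """Playback time represented by one block, in milliseconds."""
        return 1000.0 * block_size / sample_rate

    print(block_duration_ms())  # prints 32.0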


The process 300 is able to respond in a real-time and dynamic manner to accommodate different media streams (the end user may switch from one media stream to another for presentation at the media player). In this regard, the process 300 begins by initializing processing of a media stream and entering a fast adaptation mode (task 302). The fast adaptation mode represents an initial training or learning period for the gain control technique described herein. In practice, the fast adaptation mode quickly adjusts the gain of the initial digital audio samples in the media stream so that the volume perceived by the user can be normalized (if needed) by at least some amount. Thereafter, the encoder system can transition to a steady state mode during which the gain control is adjusted in a more accurate and controlled manner.


For processing of digital audio samples during the fast adaptation mode, the encoder system utilizes a designated set of weighting factors; these weighting factors are used to calculate the gain to be applied to blocks of digital audio samples. The weighting factors control the extent to which the gain applied to adjacent blocks of digital audio samples can differ. The weighting factors and exemplary gain calculation methodologies are described in more detail below with reference to FIG. 4. In practice, the weighting factors are empirically determined values that are accessible by the encoder system. Although any number of weighting factors could be used, this particular embodiment uses two weighting factors, which are labeled w1 and w2 herein. Moreover, the values of w1 and w2 are variable for the fast gain adaptation scheme, and the values of w1 and w2 are fixed for the steady state gain adaptation scheme. For the embodiment described here, w1 and w2 are positive and satisfy the relationship w1+w2=1.0 in both fast adaptation and steady state modes of operation.


In certain situations, the value of w1 for the fast adaptation mode is less than or equal to the value of w1 for the steady state mode, and the value of w2 for the fast adaptation mode is greater than or equal to the value of w2 for the steady state mode. In certain situations, w1>w2 for both the fast adaptation mode and the steady state mode. In certain situations, the difference w1−w2 for the fast adaptation mode is less than the difference w1−w2 for the steady state mode. As one specific non-limiting example, in the fast adaptation mode, w1=0.7 and w2=0.3, and in the steady state mode, w1=0.9 and w2=0.1.


For certain embodiments, the value of w1 in the fast adaptation mode is always less than the value of w1 in the steady state mode (although at the transition between modes the value might be the same), the value of w2 in the fast adaptation mode is always greater than the value of w2 in the steady state mode (although at the transition between modes the value might be the same), and the value of w1 is always greater than the value of w2 (for both modes). Moreover, the values of w1 and w2 need not remain constant during the fast adaptation mode. Indeed, these values can be adjusted during the fast adaptation mode to arrive at the values to be used during the steady state mode. In a typical implementation, w1 increases linearly from one value (for example, 0.7) to another value (for example, 0.9), and w2 decreases linearly from one value (for example, 0.3) to another value (for example, 0.1) during the fast adaptation mode.


For example, suppose that the process 300 remains in the fast adaptation mode for a hundred blocks of audio samples. When the process 300 enters the fast adaptation mode, the values of w1 and w2 are initialized to 0.7 and 0.3, respectively. After the gain adjustment of the first block of audio samples, w1 is increased by







(0.9 - 0.7)/100





and w2 is decreased by








(0.3 - 0.1)/100.





This modification of w1 and w2 happens after gain adjustment of each block of digital audio samples as long as the process 300 remains in the fast adaptation mode. By the time the process 300 reaches the end of the fast adaptation mode, the linear adjustments of w1 and w2 result in values of 0.9 and 0.1, respectively. Thereafter, the process 300 enters the steady state mode with these values. In the steady state adaptation mode, w1 and w2 do not undergo any further changes and they remain constant at their respective values (0.9 and 0.1 for this example). Later at some point in time, when the system switches back to the fast adaptation mode, w1 and w2 are again reset to their initial values (0.7 and 0.3 for this example) and linear adjustment occurs as explained above.
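
The linear ramp in this example is straightforward to express in code. The following Python sketch assumes the example values quoted above (initial weights of 0.7/0.3, steady state weights of 0.9/0.1, and a one-hundred-block fast adaptation period); the function name and parameters are illustrative only:

    def weight_schedule(block_index, fast_blocks=100, w1_start=0.7, w1_end=0.9):
        """Return (w1, w2) for the given block number.

        w1 ramps linearly from w1_start to w1_end over the fast adaptation
        period and then holds constant; w2 follows from the constraint
        w1 + w2 = 1.0, ramping down from 0.3 to 0.1 in this example.
        """
        if block_index >= fast_blocks:  # steady state mode
            w1 = w1_end
        else:                           # fast adaptation mode
            w1 = w1_start + block_index * (w1_end - w1_start) / fast_blocks
        return w1, 1.0 - w1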


Referring back to FIG. 3, the process 300 can compute or access the gain weighting factors for the fast adaptation mode (task 304), and obtain the next block of digital audio samples for processing (task 306) during the fast adaptation mode. The process 300 then continues by calculating a gain value for the current block and adjusting the gain of the digital audio samples in the current block in accordance with the calculated gain value (task 308). Such adjustment or modification of the digital audio samples results in gain-adjusted digital audio samples having normalized volume (loudness) during presentation at the destination media player. During task 308, the gain value for the current block is calculated using the currently applicable weighting factors (w1 and w2) corresponding to the fast adaptation mode. In this regard, the gain of the digital audio samples is adjusted using a fast gain adaptation scheme for as long as the process 300 remains in the fast adaptation mode. For the embodiments described here, the gain value represents a multiplicative gain that is used as a multiplier for the original non-adjusted digital audio sample value. Thus, a gain value of one represents no change; the original digital audio sample value remains unchanged.


The process 300 transmits the gain-adjusted digital audio samples to the remotely-located digital media player (task 310) in an ongoing manner. In certain embodiments, task 310 transmits the gain-adjusted digital audio samples in blocks, as is well understood. Moreover, the gain-adjusted digital audio samples will typically be transmitted in a media stream that also includes or otherwise conveys video content. Upon receipt, the media player simply decodes and presents the media stream to the user as usual. The media player need not perform any additional or special processing to implement the volume normalizing technique described here because the digital audio samples received by the media player are already gain-adjusted.


As mentioned previously, the fast adaptation mode is utilized as a brief training or learning period for new media streams. Accordingly, the encoder system may determine, detect, or otherwise be instructed to switch from the fast adaptation mode to the steady state mode (query task 312). If the process 300 detects a mode switching condition, then it can enter the steady state mode (task 314). Otherwise, the process 300 can return to task 304 to compute or access the newly adjusted values of w1 and w2, obtain the next block of audio samples for processing, and continue as described previously. The mode switching condition can be associated with one or any number of appropriate metrics, measures, or parameters. For example, the fast adaptation mode may remain active for a predetermined time period after initializing the processing of the current media stream, for a predetermined time period after the system enters the fast adaptation mode, for a predetermined time period after the weighting factors are computed or retrieved in task 304, etc. In typical implementations, the fast adaptation mode lasts for about four to eight seconds. As another example, the fast adaptation mode may remain active for a predetermined number of blocks (or samples) after initializing the processing of the current media stream, for a predetermined number of blocks (or samples) after the system enters the fast adaptation mode, for a predetermined number of blocks (or samples) after the weighting factors are retrieved in task 304, etc.
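
When the trigger is a block count, the four-to-eight-second window quoted above translates directly into a number of blocks; a minimal sketch, assuming the 32,000 samples-per-second rate and 1024-sample blocks discussed earlier:

    def fast_adaptation_block_count(duration_s, sample_rate=32000, block_size=1024):
        """Number of blocks spanned by a fast adaptation period of the given
        duration: 4 s maps to 125 blocks and 8 s to 250 blocks here."""
        return round(duration_s * sample_rate / block_size)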


Assuming that it is time for the encoder system to switch modes, the process will enter and initiate the steady state mode. The steady state mode represents a “long term” and relatively stable period for the gain control technique described herein. In practice, the steady state mode takes over after the fast adaptation mode has made its initial gain adjustments. During the steady state mode, the gain of the digital audio samples is adjusted in an ongoing and accurate manner so that the volume perceived by the user remains normalized (if needed) relative to the reference volume level.


For the steady state mode, the process 300 retrieves or accesses the gain weighting factors for the steady state mode (task 316), which are different than the gain weighting factors used during the fast adaptation mode. The process 300 also obtains the next block of digital audio samples for processing (task 318) during the steady state mode. The process 300 then continues by calculating the gain value for the current block and adjusting the gain of the digital audio samples in the current block in accordance with the calculated gain value (task 320). During task 320, the gain value for the current block is calculated using the gain weighting factors (w1 and w2) corresponding to the steady state mode. Therefore, the gain of the digital audio samples is adjusted using a steady state gain adaptation scheme for as long as the process 300 remains in the steady state mode, where the steady state gain adaptation scheme is different than the fast gain adaptation scheme. The process 300 transmits the gain-adjusted digital audio samples to the remotely-located digital media player (task 322) as described above for task 310. Again, the media player need not perform any additional or special processing to implement the volume normalizing technique described here because the digital audio samples received by the media player are already gain-adjusted.


As mentioned previously, the process 300 can be repeated for each new media stream. Thus, the encoder system may determine, detect, or otherwise be instructed to switch from the current audio stream to a new audio stream (query task 324). If the process 300 detects a new media or audio stream, then it can initialize the processing of the new media stream and again enter the fast adaptation mode (task 302). Otherwise, the process 300 can return to task 318, obtain the next block of audio samples for processing in the steady state mode, and continue as described previously.


Although the embodiment described here uses two modes (fast adaptation and steady state), an adaptive digital gain control technique could instead employ only one mode, or it could employ more than two different modes. The use of two different modes strikes a good balance between audio quality, effectiveness, and normalization speed.


Referring now to FIG. 4, the digital gain calculation process 400 can be utilized by the encoder system during the adaptive digital gain control process 300. The process 400 is performed during both the fast adaptation mode and the steady state mode (with different values for the weighting factors w1 and w2, as explained previously). The process 400 may begin (task 402) with the first block of digital audio samples (where k indicates the block number), and by obtaining that block of digital audio samples in the media stream for processing (task 404). The process 400 considers the digital audio samples in blocks because a single audio sample conveys no inherent loudness or volume information by itself. For this embodiment, the process 400 calculates a loudness estimate (Lk) for the current block of digital audio samples (task 406), where the kth block includes N samples: {a1, a2, a3, . . . aN}. Although other estimating methodologies could be employed, the embodiment described here calculates the loudness estimate in accordance with the expression








Lk = |a1| + |a2| + . . . + |aN|,





where Lk is the loudness estimate, ai represents the digital audio samples, and the current block includes N digital audio samples. The absolute value of each audio sample is taken because any given audio sample may be positive or negative, depending upon its intended sound pressure direction relative to the listener's eardrums. Thus, the process 400 calculates the loudness estimate as a sum of the “magnitudes” of the audio samples contained in the current block.
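
In code, the loudness estimate of task 406 reduces to a single pass over the block; a minimal Python sketch (Python integers do not overflow, so no accumulator guard is needed):

    def loudness_estimate(block):
        """Loudness estimate Lk for one block: the sum of the magnitudes
        of its signed integer audio samples a1 .. aN."""
        return sum(abs(a) for a in block)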


The calculated loudness estimate can then be compared to a silence threshold value (query task 408). The silence threshold value may be empirically determined and defined such that it is low enough to serve as an accurate threshold and high enough to contemplate bit errors, artifacts, inconsistencies in the original audio data, and “non-zero” audio samples that cannot be detected as sound. If the calculated loudness estimate is less than the silence threshold, then the process 400 can apply a multiplicative gain of one (or any baseline value, which may but need not be approximately equal to one) to the current block of digital audio samples (task 410). In other words, the process 400 assumes that the gain value (gk) for the current block will be equal to one. Thus, if the process 400 determines that the current block represents silence, then there is no need to apply any gain, and the remainder of the process 400 can be bypassed. As explained above with reference to FIG. 3 and the process 300, while operating in the fast adaptation mode, the values of w1 and w2 are linearly updated on a block-by-block basis (task 412). In this regard, task 412 leads back to task 404 so that the process 400 can obtain the next block for processing. If in the steady state mode, then task 412 would be bypassed.
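
A sketch of this silence gate follows, reusing loudness_estimate from the sketch above; the threshold value shown is purely illustrative, since the patent leaves the silence threshold to empirical tuning:

    SILENCE_THRESHOLD = 2048  # illustrative; determined empirically in practice

    def is_silent(block):
        """True when the block's loudness estimate falls below the silence
        threshold; such a block keeps a gain of one and skips the rest of
        the gain calculation."""
        return loudness_estimate(block) < SILENCE_THRESHOLD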


If query task 408 determines that the loudness estimate is not less than the silence threshold, then the process 400 may continue by determining or calculating a reference gain value (gr) that is influenced by the loudness estimate (task 414). More specifically, the reference gain value is based upon the loudness estimate and a reference loudness value. Although other methodologies could be employed, the embodiment described here calculates the reference gain value in accordance with the expression







gr = Lref/Lk,






where gr is the reference gain value, and Lref is the reference loudness value. The reference loudness value is a constant value that represents, corresponds to, or otherwise indicates the desired normalized volume. Ideally and theoretically, therefore, gain-adjusted digital audio samples will be characterized by an adjusted loudness that corresponds to the reference loudness value. The reference gain value is used later in the process 400.
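
A corresponding sketch of task 414 follows; the reference loudness constant shown is illustrative only, standing in for whatever constant encodes the desired normalized volume:

    L_REF = 500000  # reference loudness Lref; illustrative value

    def reference_gain(loudness, l_ref=L_REF):
        """Reference gain gr = Lref / Lk. The silence gate has already
        screened out near-zero loudness values, so the division is safe."""
        return l_ref / loudness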


The process 400 may also calculate a maximum gain value (gm) for the current block of digital audio samples (task 416). Although other methodologies could be employed, the embodiment described here calculates the maximum gain value in accordance with the expression








gm = 2^(n-1)/amax,





where n is the number of bits per digital audio sample, and where amax is the maximum absolute sample value in the current block of digital audio samples. The process 400 determines the maximum allowable gain value for the current block in this manner to prevent bit overflow in the digital audio samples of the block. For example, if the current block includes a digital audio sample that has a relatively high value that approaches the maximum sample value, then very little multiplicative gain can be applied to that sample without causing overflow. On the other hand, if all of the samples in the block have relatively low values, then a higher amount of multiplicative gain can be applied to the block.
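
Task 416 can be sketched as follows for n-bit signed samples; the function assumes the block has already passed the silence gate, so its maximum absolute sample value is nonzero:

    def max_gain(block, bits_per_sample=16):
        """Maximum gain gm = 2^(n-1) / amax that the block can accept
        without overflowing an n-bit signed sample."""
        a_max = max(abs(a) for a in block)
        return (1 << (bits_per_sample - 1)) / a_max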


This particular embodiment also determines or calculates an estimated gain value (ge) for the current block of digital audio samples (task 418), where the estimated gain value is influenced by the reference gain value and/or by the maximum gain value. More specifically, the estimated gain value is based upon the reference gain value and the maximum gain value. Although other techniques could be employed, the embodiment described here calculates the estimated gain value in accordance with the expression ge=min(gr, gm), where ge is the estimated gain value. Thus, the estimated gain value will be equal to either the reference gain value or the maximum gain value, whichever one is lower (or equal to both if they are the same).


Next, the process 400 calculates the gain value (gk) to be applied to the current block of digital audio samples (task 420). Although other methodologies could be employed, the embodiment described here calculates the gain value in accordance with the expression gk=min(w1×gk-1+w2×ge, gm), where gk is the computed gain value, and w1 and w2 are the weighting factors, which were described previously. The expression for gk includes a minimum operator that selects one of two values, whichever is lower (or selects either value if they are both the same). The first value is defined by the term w1×gk-1+w2×ge, and the second value is the maximum gain value (gm). As indicated by this expression, the gain value will be influenced by the estimated gain value, the maximum gain value, and a previous gain value (gk-1) for a previous block of digital audio samples in the media stream. For the reasons described above, this computed gain value will also be influenced by the reference gain value calculated during task 414. For this particular embodiment, the gain value for the current block (gk) is determined in response to the gain value for the immediately preceding block (gk-1). Alternatively (or additionally), gk could be calculated by considering other gain values for blocks prior to the immediately preceding block. This reliance on a previous gain value prevents large block-to-block variations in the applied gain. In practice, gk will have a value that ranges from 1.0 to about 8.0, although values that exceed 8.0 might be realized in certain embodiments.
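
Tasks 418 and 420 combine into a short sketch; the variable names follow the symbols used in the text:

    def block_gain(g_prev, g_r, g_m, w1, w2):
        """Smoothed per-block gain gk.

        ge = min(gr, gm) is the estimated gain of task 418; the weighted sum
        w1*g_prev + w2*ge limits block-to-block gain variation, and the outer
        min() re-applies the overflow ceiling gm (task 420).
        """
        g_e = min(g_r, g_m)
        return min(w1 * g_prev + w2 * g_e, g_m)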


The computed gain value for the current block of digital audio samples can then be applied to the samples in the block (task 422). In practice, task 422 modifies, adjusts, or otherwise changes the current block of digital audio samples. More specifically, task 422 modifies each sample in the current block by multiplying the original sample value by gk, resulting in a gain-adjusted sample value. As mentioned above with reference to the process 300, the gain-adjusted sample values are sent to the remotely-located media player, which can then present the media stream with a normalized loudness. FIG. 4 depicts task 422 leading back to task 412 as an indication of the potentially ongoing block-by-block nature of the process 400. As described previously, the process 400 can be initially performed for the fast adaptation mode (using the linearly adjusted set of weighting factors) and then repeated for the steady state mode (using a fixed set of weighting factors).
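
Finally, task 422 reduces to a per-sample multiply; in the sketch below the clamp is defensive only, because gk is already capped at gm to prevent overflow:

    def apply_gain(block, g_k, bits_per_sample=16):
        """Scale every sample by the multiplicative gain gk, rounding back
        to integers and clamping to the n-bit signed range as a safety net."""
        hi = (1 << (bits_per_sample - 1)) - 1
        lo = -(1 << (bits_per_sample - 1))
        return [max(lo, min(hi, round(a * g_k))) for a in block]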


While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.

Claims
  • 1. A non-transitory and tangible computer-readable medium having instructions operable to cause a digital media processing device to perform operations for a media stream, comprising: calculating, by the digital media processing device, a loudness estimate for a current block of digital audio samples in the media stream, the loudness estimate calculated as a sum of magnitudes of the digital audio samples contained in the current block; calculating, by the digital media processing device, a reference gain value for the current block of digital audio samples, the reference gain value being equal to a constant reference loudness value divided by the loudness estimate; calculating, by the digital media processing device, a maximum gain value for the current block of digital audio samples, based on a maximum absolute sample value in the current block of digital audio samples; choosing, by the digital media processing device, an estimated gain value for the current block of digital audio samples, the estimated gain value being equal to either the reference gain value or the maximum gain value, whichever is less; selecting, by the digital media processing device, either a first current gain value or a second current gain value for the current block of digital audio samples, whichever is less, the first current gain value comprising a weighted sum of the estimated gain value and a previous gain value applied to a previous block of digital audio samples in the media stream, and the second current gain value comprising the maximum gain value, to obtain a selected current gain value; modifying, by the digital media processing device, the current block of digital audio samples by applying the selected current gain value to the digital audio samples in the current block of digital audio samples, resulting in gain-adjusted digital audio samples; and transmitting, by the digital media processing device, the gain-adjusted digital audio samples to a remotely-located media player to generate sound based on the gain-adjusted digital audio samples.
  • 2. The computer-readable medium of claim 1, wherein the computer program product is operable to cause the digital media processing device to perform further operations, comprising: comparing the loudness estimate to a silence threshold value; and applying a multiplicative gain value of one to the current block of digital audio samples when the loudness estimate is less than the silence threshold value.
  • 3. The computer-readable medium of claim 1, wherein calculating the loudness estimate is performed in accordance with the expression Lk = |a1| + |a2| + . . . + |aN|, where Lk is the loudness estimate and a1 . . . aN are the N digital audio samples in the current block.
  • 4. The computer-readable medium of claim 3, wherein calculating the reference gain value is performed in accordance with the expression gr = Lref/Lk, where gr is the reference gain value and Lref is the constant reference loudness value.
  • 5. The computer-readable medium of claim 4, wherein calculating the maximum gain value is performed in accordance with the expression gm = 2^(n-1)/amax, where gm is the maximum gain value, n is the number of bits per digital audio sample, and amax is the maximum absolute sample value in the current block of digital audio samples.
  • 6. The computer-readable medium of claim 5, wherein the selecting is performed in accordance with the expression gk=min(w1gk-1+w2ge, gm), where gk is the selected current gain value, and w1 and w2 are weighting factors.
  • 7. The computer-readable medium of claim 1, wherein modifying the current block of digital audio samples is performed for a predetermined time period after initializing processing of the media stream.
  • 8. The computer-readable medium of claim 1, wherein modifying the current block of digital audio samples is performed for a predetermined number of blocks of the digital audio samples after initializing processing of the media stream.
  • 9. The computer-readable medium of claim 1, wherein: one digital audio sample is represented by sixteen bits; and one block of digital audio samples is represented by 1024 digital audio samples.
  • 10. A method of performing operations on a media stream, the method comprising: calculating, by a digital media processing device, a loudness estimate for a current block of digital audio samples in the media stream, the loudness estimate calculated as a sum of magnitudes of the digital audio samples contained in the current block; calculating, by the digital media processing device, a reference gain value for the current block of digital audio samples, the reference gain value being equal to a constant reference loudness value divided by the loudness estimate; calculating, by the digital media processing device, a maximum gain value for the current block of digital audio samples, based on a maximum absolute sample value in the current block of digital audio samples; choosing, by the digital media processing device, an estimated gain value for the current block of digital audio samples, the estimated gain value being equal to either the reference gain value or the maximum gain value, whichever is less; selecting, by the digital media processing device, either a first current gain value or a second current gain value for the current block of digital audio samples, whichever is less, the first current gain value comprising a weighted sum of the estimated gain value and a previous gain value applied to a previous block of digital audio samples in the media stream, and the second current gain value comprising the maximum gain value, to obtain a selected current gain value; modifying, by the digital media processing device, the current block of digital audio samples by applying the selected current gain value to the digital audio samples in the current block of digital audio samples, resulting in gain-adjusted digital audio samples; and transmitting, by the digital media processing device, the gain-adjusted digital audio samples to a remotely-located media player to generate sound based on the gain-adjusted digital audio samples.
  • 11. The method of claim 10, further comprising: comparing the loudness estimate to a silence threshold value; and applying a multiplicative gain value of one to the current block of digital audio samples when the loudness estimate is less than the silence threshold value.
  • 12. The method of claim 10, wherein calculating the loudness estimate is performed in accordance with the expression Lk = |a1| + |a2| + . . . + |aN|, where Lk is the loudness estimate and a1 . . . aN are the N digital audio samples in the current block.
  • 13. The method of claim 12, wherein calculating the reference gain value is performed in accordance with the expression gr = Lref/Lk, where gr is the reference gain value and Lref is the constant reference loudness value.
  • 14. The method of claim 13, wherein calculating the maximum gain value is performed in accordance with the expression gm = 2^(n-1)/amax, where gm is the maximum gain value, n is the number of bits per digital audio sample, and amax is the maximum absolute sample value in the current block of digital audio samples.
  • 15. The method of claim 14, wherein the selecting is performed in accordance with the expression gk=min(w1gk-1+w2ge, gm), where gk is the selected current gain value, and w1 and w2 are weighting factors.
  • 16. The method of claim 10, wherein modifying the current block of digital audio samples is performed for a predetermined time period after initializing processing of the media stream.
  • 17. The method of claim 10, wherein modifying the current block of digital audio samples is performed for a predetermined number of blocks of the digital audio samples after initializing processing of the media stream.
  • 18. The method of claim 10, wherein: one digital audio sample is represented by sixteen bits; and one block of digital audio samples is represented by 1024 digital audio samples.
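For orientation, the following is a minimal sketch of the block-wise gain adaptation recited in claims 10-15 above, assuming 16-bit samples and 1024-sample blocks per claim 18. It is not the patent's reference implementation: the function name adapt_gain, the use of NumPy, and the numeric values chosen for the reference loudness, silence threshold, and weighting factors are all illustrative assumptions, since the claims leave those constants unspecified.

# Illustrative sketch only, not the patent's reference implementation.
import numpy as np

BLOCK_SIZE = 1024                     # samples per block (claim 18)
S_MAX = 32767                         # largest 16-bit sample magnitude (claim 18)
L_REF = 0.1 * S_MAX * BLOCK_SIZE      # constant reference loudness (illustrative)
SILENCE = 0.001 * S_MAX * BLOCK_SIZE  # silence threshold of claim 11 (illustrative)
W1, W2 = 0.9, 0.1                     # weighting factors (illustrative)

def adapt_gain(blocks):
    """Yield gain-adjusted int16 blocks from an iterable of int16 blocks."""
    g_prev = 1.0                                   # gain applied to previous block
    for x in blocks:
        mags = np.abs(x.astype(np.int64))          # widen before abs/sum to avoid overflow
        l_k = int(mags.sum())                      # loudness estimate: sum of magnitudes (claim 12)
        if l_k < SILENCE:                          # claim 11: silence passes through
            yield x                                # multiplicative gain of one
            continue
        g_r = L_REF / l_k                          # reference gain (claim 13)
        g_m = S_MAX / int(mags.max())              # max gain; one plausible anti-clipping form of claim 14
        g_e = min(g_r, g_m)                        # estimated gain, whichever is less (claim 10)
        g_k = min(W1 * g_prev + W2 * g_e, g_m)     # selected gain (claims 6 and 15)
        g_prev = g_k
        yield np.clip(x * g_k, -S_MAX, S_MAX).astype(np.int16)

A stream could be fed to this sketch as, for example, adapt_gain(np.array_split(pcm, len(pcm) // BLOCK_SIZE)) for an int16 array pcm. Note that claims 7-8 and 16-17 apply the gain modification for a predetermined time or number of blocks after stream initialization, corresponding to the fast-adaptation phase described in the Abstract; the sketch omits that phase switch, which would amount to changing W1 and W2 once the predetermined period has elapsed.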
CROSS-REFERENCE TO RELATED APPLICATION

This application is a divisional of U.S. patent application Ser. No. 12/507,971, filed Jul. 23, 2009, and issued on Mar. 26, 2013 as U.S. Pat. No. 8,406,431.

Related Publications (1)
Number Date Country
20130208918 A1 Aug 2013 US
Divisions (1)
Number Date Country
Parent 12507971 Jul 2009 US
Child 13848535 US