This disclosure relates generally to media presentation, and, more particularly, to methods and apparatus to perform speed-enhanced playback of recorded media.
Traditionally, broadcast programs have been consumed via a media presentation device at which the broadcast programs are received. However, more recently, recording devices such as set-top boxes (STB), digital video recorders (DVR), personal video recorders (PVR), game consoles, etc., which permit media to be recorded and replayed in accordance with the desires of individual audience members, have become commonplace.
Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
In general, the example apparatus and methods described herein may be used to perform speed-enhanced playback of recorded media. Examples disclosed herein include a digital recording/playback device at a consumption site that allows a consumer to playback media at a faster rate than the rate at which the media was originally recorded or broadcast. For example, the example digital recording/playback device disclosed herein may enable a consumer to playback media at a rate thirty percent faster (e.g., 1.3×) than the original playback rate (e.g., 1.0×).
Examples disclosed herein facilitate speeding up playback of previously recorded media. For example, disclosed techniques include dropping audio frames, and corresponding video frames, whose number of skip bytes satisfies a target threshold. As used herein, an “audio frame” refers to a block of data representing time-domain audio in a compressed format. In some examples, the target threshold number of skip bytes is computed to drop a target percentage of audio frames by evaluating the number of skip bytes in the audio frames of the recorded media. Audio frames with a qualifying number of skip bytes (e.g., a number that satisfies a skip bytes threshold) are identified as candidate frames that may be dropped during playback to perform speed-enhanced playback. In some examples, the number of audio frames that are dropped depends on the target playback rate. For example, if recorded media includes 256 audio frames, and a playback rate thirty percent faster (e.g., 1.3×) than the original playback rate (e.g., 1.0×) is desired, examples disclosed herein select approximately 61 audio frames to drop to facilitate a playback rate of 1.3×.
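The target-frame computation described above can be sketched as follows. Equation 1 itself is not reproduced in this excerpt, so the formulation below — drop frames_total × (1 − 1/rate_target) frames so that the remaining frames play in 1/rate_target of the original duration — is an assumption; the counts quoted in the text (e.g., “approximately 61” of 256 frames at 1.3×) may reflect a slightly different rounding or formulation.

```python
# Hypothetical sketch of the target-frame calculation. The exact
# formula ("Equation 1") is not reproduced in this excerpt; this uses
# the natural formulation frames_target = frames_total * (1 - 1/rate).

def target_frames_to_drop(frames_total: int, rate_target: float) -> int:
    """Number of frames to drop so the remaining frames play back in
    1/rate_target of the original duration."""
    if rate_target <= 1.0:
        return 0  # no speed-up requested
    return round(frames_total * (1.0 - 1.0 / rate_target))

# Dropping this many frames shortens the rendered media, which is what
# produces the speed-up relative to the 1.0x original rate.
remaining = 256 - target_frames_to_drop(256, 1.3)
```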
Digital broadcast systems typically transmit one or more high-bandwidth signals, each of which is typically composed of a stream of data or data packets having a plurality of video, audio and/or other digital programs or content multiplexed therein. A number of well-known data compression techniques (e.g., audio/video content compression techniques), transmission protocols and the like are typically employed to generate and transmit a multi-program data stream or bit stream, which is commonly referred to as a transport stream. In particular, digital television programming is typically transmitted according to a standard promulgated by the Advanced Television Systems Committee (ATSC). The ATSC standard is a comprehensive standard relating to the conveyance of digital television signals. Under the ATSC standard, video information associated with a program is encoded and compressed according to the well-known Moving Picture Experts Group-2 (MPEG-2) standard and audio information associated with the program is encoded and compressed according to the well-known AC-3 standard. As a result, an ATSC data stream or bit stream contains video information in the form of MPEG-2 packets and audio information in the form of AC-3 packets. However, other digital transmission protocols, data compression schemes and the like may be used instead.
As described above, the disclosed digital broadcast streams use AC-3 packets to communicate audio information. The AC-3 packets contain the actual audio data as well as headers that convey additional information such as synchronization information, timing information and sampling rate information. The AC-3 data stream is organized into frames, and each frame contains sufficient data to reconstruct audio corresponding approximately to the duration of a video frame. Each frame has a fixed size in terms of total number of 16-bit words. During the rendering (e.g., playback) process, each audio frame is time-aligned with its corresponding video frame to achieve lip synchronization.
In the case of 48 kHz-sampled audio with 16 bits per sample, each AC-3 frame represents 6 blocks of compressed audio. In the illustrated example, each block is derived from 256 time-domain samples per channel. The number of channels can vary from 1 (e.g., monophonic audio) to 6 (e.g., “5.1 channel surround sound”). A multi-pass algorithm attempts to compress the data from each 256-sample block of audio in order to minimize the number of bits required to represent it. Since the frame size is fixed, at the end of the optimization process, one or more bytes may be left over as “surplus.” In the bit stream, the presence of the “surplus” bytes is identified by a “skip length exists” flag (e.g., “SKIPLE”). If the SKIPLE bit is a “1,” then dummy bytes are included in the frame. The frame also includes a 9-bit “skip length” code (e.g., “SKIPL”) that defines the number of bytes to skip at this point in the stream during playback. The skip bytes are generally inserted immediately after the compressed representation of a block of audio in the form of Modified Discrete Cosine Transform (MDCT) coefficients. Examples disclosed herein parse an AC-3 frame to identify the total number of skip bytes in the data stream by locating the “skip length exists” flags and the “skip length” codes. In the illustrated example, each AC-3 frame, when uncompressed by an AC-3 decoder, yields 1536 time-domain samples for each channel. The playback duration of such a frame is 32 milliseconds.
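The skip-byte tally described above can be illustrated with a simplified parser. The sketch below is not a real AC-3 frame parser — it assumes the 1-bit SKIPLE flag and the 9-bit SKIPL code are packed back-to-back for each block, whereas a real decoder must walk the full AC-3 frame syntax (exponents, mantissas, MDCT coefficient data, etc.) to reach those fields.

```python
# Simplified illustration of tallying skip bytes from the "skip length
# exists" (SKIPLE, 1 bit) and "skip length" (SKIPL, 9 bits) fields.
# The bit layout here is a toy serialization, not the AC-3 syntax.

class BitReader:
    """Reads an MSB-first bit stream from a bytes object."""

    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # position counted in bits

    def read(self, nbits: int) -> int:
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            bit = (byte >> (7 - self.pos % 8)) & 1
            value = (value << 1) | bit
            self.pos += 1
        return value

def total_skip_bytes(data: bytes, blocks: int) -> int:
    """Sum SKIPL over `blocks` blocks (an AC-3 frame has 6 blocks)."""
    reader, total = BitReader(data), 0
    for _ in range(blocks):
        if reader.read(1):            # SKIPLE: skip bytes present?
            total += reader.read(9)   # SKIPL: number of skip bytes
        # (a real parser would also advance past the dummy skip bytes
        #  and the compressed audio data for the block here)
    return total
```

The point of the sketch is that the tally needs only header-level fields, which is why candidate frames can be found without decompressing any audio.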
The example digital recording/playback device 104 of
The example media database 110 may be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The example media database 110 may additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. The example media database 110 may additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. Although the media database 110 is shown as a single database in the illustrated example, the media database 110 may be implemented by any number and/or type(s) of databases.
As described in detail below in connection with
Although examples disclosed herein are directed to television media broadcast using the ATSC standard, the techniques disclosed herein may additionally or alternatively be applied to other forms of digital media that include audio information in the form of AC-3 packets. For example, the disclosed techniques may be used to perform speed-enhanced playback of a podcast. Furthermore, the disclosed techniques may additionally or alternatively be used on digital media whose audio information includes skip bytes located at known positions in the audio packets.
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example, the target frames calculator 215 receives the total number of audio frames (frames_total) from the audio parser 205. In Equation 1 above, the target playback rate (rate_target) is a default (e.g., preset) playback rate (e.g., 1.3×). However, in additional or alternative implementations, the target playback rate may be a user-provided playback rate.
In some examples, the target frames calculator 215 may additionally or alternatively calculate the target number of frames to drop (frames_target) for a “chunk” of audio rather than across the entire media. For example, the target frames calculator 215 may calculate the target number of frames to drop (frames_target) based on a subset of the frames of the media (e.g., 1,000 audio frames corresponding to 32 seconds of audio). For example, by applying Equation 1 above, the example target frames calculator 215 may calculate that the target number of frames to drop (frames_target) in a 1,000 audio frame “chunk” is 330 frames to achieve a target playback rate (rate_target) of 1.3×.
In the illustrated example of
In some examples, the frames selector 220 selects frames for dropping in “chunks” of audio rather than across the entire media. For example, the frames selector 220 may select 330 frames for dropping in a 1,000 frame “chunk” of audio to achieve a target playback rate (rate_target) of 1.3× across the 1,000 frame “chunk.”
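The selection step might be sketched as follows. Sorting candidates in descending order of skip bytes is an assumption here — the disclosure says candidates are sorted by skip bytes but does not state the direction; frames with the most surplus (skip) bytes carried the least audio information, so dropping them first is plausibly the least audible choice.

```python
# Illustrative sketch: choose candidate frames to drop until the
# target count is met. The descending sort order is an assumption,
# not quoted from the disclosure.

def select_frames_to_drop(candidates, frames_target):
    """candidates: list of (frame_id, skip_bytes) tuples.
    Returns the ids of the frames selected for dropping, in
    playback order."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return sorted(fid for fid, _ in ranked[:frames_target])

candidates = [(0, 2), (3, 14), (7, 9), (12, 14), (20, 1)]
select_frames_to_drop(candidates, 2)  # the two richest-in-skip-bytes frames
```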
In the illustrated example of
In some examples, audio discontinuities may become apparent to a consumer due to audio frames being dropped by the media renderer 225 during media playback. For example, two audio frames that are non-consecutive during playback of media at the original playback rate (e.g., 1.0×) may be rendered by the media renderer 225 as consecutive audio frames during speed-enhanced playback of the media (e.g., at 1.3×). For example, during speed-enhanced playback of media including audio frames 1-3, the example media renderer 225 may drop audio frame 2, thereby resulting in the media renderer 225 rendering audio frame 3 after rendering audio frame 1. In the illustrated example of
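Because the disclosure describes filtering the audio (e.g., with a low-pass filter) to smooth such discontinuities, a minimal sketch of that idea — a one-pole low-pass filter applied across a splice — might look like the following. The filter design and coefficient are illustrative only; the disclosure does not specify them.

```python
# Minimal sketch of smoothing the splice between two non-consecutive
# audio frames with a one-pole IIR low-pass filter:
#   y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
# The alpha value is an illustrative assumption.

def lowpass_smooth(samples, alpha=0.5):
    """Low-pass filter a sample sequence, attenuating sharp jumps."""
    out, prev = [], samples[0]
    for x in samples:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

# A hard jump where frame 3 follows frame 1 (frame 2 dropped) is
# spread over several samples after filtering:
spliced = [0.0, 0.0, 1.0, 1.0]   # hard discontinuity at index 2
smoothed = lowpass_smooth(spliced)
```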
While an example manner of implementing the digital recording/playback device 104 of
Flowcharts representative of example machine readable instructions for implementing the example digital recording/playback device 104 of
As mentioned above, the example processes of
At block 408, the example speed enhancer 112 determines whether the selected audio frame is a candidate frame for dropping. For example, the example candidate identifier 210 may compare the number of skip bytes included in the selected audio frame to a skip bytes threshold. If, at block 410, the candidate identifier 210 determines that the number of skip bytes included in the selected audio frame does not satisfy the skip bytes threshold, then control proceeds to block 414 to determine whether there is another audio frame to process.
If, at block 410, the candidate identifier 210 determines that the number of skip bytes included in the selected audio frame satisfies the skip bytes threshold, then, at block 412, the example candidate identifier 210 determines that the selected audio frame is a candidate frame for dropping. For example, the candidate identifier 210 records a frame identifier representative of the selected audio frame and the number of skip bytes included in the selected audio frame in the example data structure 212.
At block 414, the example audio parser 205 determines whether there is another audio frame to process. If, at block 414, the audio parser 205 determines that there is another audio frame to process, control returns to block 404 to select the next audio frame to process.
If, at block 414, the audio parser 205 determines that there is not another audio frame to process, then, at block 416, the example speed enhancer 112 sorts the candidate frames included in the data structure 212. For example, the candidate identifier 210 may sort the candidate frames included in the data structure 212 based on the number of skip bytes included in the respective candidate frames.
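The candidate-identification pass (blocks 404–416) might be sketched as follows, assuming the per-frame skip-byte counts have already been parsed. The threshold value, record format, and descending sort order are illustrative assumptions, not taken from the disclosure.

```python
# Sketch of the candidate-identification loop: compare each frame's
# skip-byte count to a threshold, record qualifying frames, then sort
# the records by skip bytes (descending order assumed).

SKIP_BYTES_THRESHOLD = 4  # assumed value, not from the disclosure

def identify_candidates(skip_bytes_per_frame, threshold=SKIP_BYTES_THRESHOLD):
    """Return (frame_id, skip_bytes) records for frames whose skip-byte
    count satisfies the threshold, sorted by skip bytes."""
    candidates = [(fid, sb) for fid, sb in enumerate(skip_bytes_per_frame)
                  if sb >= threshold]
    return sorted(candidates, key=lambda rec: rec[1], reverse=True)

identify_candidates([0, 7, 4, 12, 2, 5])
```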
At block 418, the example speed enhancer 112 calculates the maximum playback rate for the selected media. For example, the example candidate identifier 210 may determine the maximum playback rate for the selected media based on the total number of audio frames included in the media (frames_total) and the number of candidate frames for dropping included in the data structure 212 for the selected media.
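The maximum-rate computation is not spelled out in this excerpt; one plausible formulation, assumed here, is that the highest achievable rate corresponds to dropping every candidate frame, giving frames_total / (frames_total − candidates).

```python
# Assumed formulation of the maximum playback rate: if all candidate
# frames were dropped, only (frames_total - candidates) frames remain,
# so the media plays in that fraction of its original duration.

def max_playback_rate(frames_total: int, candidates: int) -> float:
    """rate_max = frames_total / (frames_total - candidates)."""
    return frames_total / (frames_total - candidates)

max_playback_rate(1000, 231)  # ~1.3x
```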
At block 420, the example speed enhancer 112 determines whether there is additional recorded media in the media database 110 to process. If, at block 420, the example speed enhancer 112 determines that there is additional media to process, then control returns to block 402 to select another recorded media to process. If, at block 420, the example speed enhancer 112 determines that there is no additional media to process, the example program 400 of
If, at block 502, the digital recording/playback device 104 determines that recorded media is selected for playback at an enhanced rate, then, at block 504, the example speed enhancer 112 (
The processor platform 600 of the illustrated example includes a processor 612. The processor 612 of the illustrated example is hardware. For example, the processor 612 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer.
The processor 612 of the illustrated example includes a local memory 613 (e.g., a cache). The processor 612 of the illustrated example executes the instructions to implement the example speed enhancer 112, the example audio parser 205, the example candidate identifier 210, the example target frames calculator 215, the example frames selector 220, the example media renderer 225 and the example audio filter 230. The processor 612 of the illustrated example is in communication with a main memory including a volatile memory 614 and a non-volatile memory 616 via a bus 618. The volatile memory 614 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 616 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 614, 616 is controlled by a memory controller.
The processor platform 600 of the illustrated example also includes an interface circuit 620. The interface circuit 620 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
In the illustrated example, one or more input devices 622 are connected to the interface circuit 620. The input device(s) 622 permit(s) a user to enter data and commands into the processor 612. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 624 are also connected to the interface circuit 620 of the illustrated example. The output devices 624 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 620 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.
The interface circuit 620 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 626 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 600 of the illustrated example also includes one or more mass storage devices 628 for storing software and/or data. Examples of such mass storage devices 628 include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The example mass storage 628 implements the example media database 110.
The coded instructions 632 of
From the foregoing, it will be appreciated that the above disclosed methods, apparatus and articles of manufacture perform speed-enhanced playback of recorded media. Examples disclosed herein perform the speed-enhanced playback of recorded media by dropping audio frames and the corresponding video frames during playback. To determine candidate audio frames for dropping, disclosed examples compare the number of skip bytes included in an audio frame to a skip bytes threshold. If the number of skip bytes included in the audio frame satisfies the skip bytes threshold, the audio frame is identified as a candidate frame for dropping. Thus, disclosed examples identify candidate frames for dropping without performing the processor-intensive step of decompressing audio frames.
In some examples, whether a candidate frame is dropped during playback is determined based on a target playback rate and the total number of audio frames in the media. For example, disclosed examples may calculate a target number of frames to drop based on the target playback rate and the total number of audio frames in the media. Disclosed examples then iteratively select candidate frames to drop based on the number of skip bytes included in the candidate frame until the target number of frames to drop is satisfied. During playback of the media, disclosed examples drop the selected candidate frames and the corresponding video frames. Thus, disclosed examples improve on digital recording/playback devices by enabling speed-enhanced playback of recorded media. Because dropping audio frames may result in audio artifacts (e.g., skips) during playback, disclosed examples also filter the audio (e.g., via a low-pass filter) to smooth audio discontinuities, thereby providing an enjoyable media presentation to the user.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
This patent arises from a continuation of U.S. patent application Ser. No. 17/099,385, now U.S. Pat. No. 11,270,735, filed on Nov. 16, 2020, which is a continuation of U.S. patent application Ser. No. 15/947,638, now U.S. Pat. No. 10,839,854, filed on Apr. 6, 2018, which is a continuation of U.S. patent application Ser. No. 15/251,561, now U.S. Pat. No. 9,940,968, filed on Aug. 30, 2016. Priority to U.S. patent application Ser. No. 17/099,385, U.S. patent application Ser. No. 15/947,638, and U.S. patent application Ser. No. 15/251,561 is claimed. The entireties of U.S. patent application Ser. No. 17/099,385, U.S. patent application Ser. No. 15/947,638, and U.S. patent application Ser. No. 15/251,561 are incorporated herein by reference.
Number | Date | Country
---|---|---
20220189509 A1 | Jun 2022 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 17099385 | Nov 2020 | US
Child | 17688736 | | US
Parent | 15947638 | Apr 2018 | US
Child | 17099385 | | US
Parent | 15251561 | Aug 2016 | US
Child | 15947638 | | US