This application is the U.S. national phase of International Application No. PCT/GB2009/000753 filed 20 Mar. 2009, which designated the U.S. and claims priority to EP Application No. 08251238.5 filed 31 Mar. 2008, the entire contents of each of which are hereby incorporated by reference.
The present invention relates to media encoding and in particular to an apparatus for and method of encoding video content making use of semantic data.
In recent years, digital media has become a commonplace carrier for delivering information to users. In particular, digital video allows users to obtain information through visual and audio means.
In its most basic form, digital video is composed of a sequence of complete image frames which are played back to the user at a rate of several frames per second. The quality of the video depends on the resolution of each frame and on the rate at which frames are displayed. Higher resolution means that more detail can be included in each frame, whilst higher frame rates improve the user's perception of movement in the video.
Increasing the quality of video content results in larger file sizes, which is undesirable in many applications. Encoding techniques, and in particular video compression techniques, are known which aim to reduce file sizes while minimizing any loss in quality of the video. Video compression techniques generally fall into two groups: spatial compression and temporal compression, with many common video compression formats using a combination of both techniques.
Spatial compression involves applying compression to each individual image frame, for example in a manner similar to JPEG compression for still images.
Temporal compression exploits similarities in sequences of consecutive frames to reduce the information storage requirements. In many videos, significant parts of the scene do not change over time. In this case, the unchanged scene information from a previous frame can be re-used when rendering the next frame, while only information relating to the changed pixels is stored. This can result in significant reductions in file size. Similarly, where the camera pans across a scene, a significant portion of the new frame is identical to the previous frame but offset in the direction of the pan. In this case only the newly viewable pixels need to be encoded.
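As an illustration of the changed-pixel principle (and not of any particular codec), the following sketch compares two consecutive grayscale frames held as NumPy arrays and stores only the pixels that differ; the threshold parameter is an assumption for illustration.

```python
import numpy as np

def encode_delta(prev_frame: np.ndarray, next_frame: np.ndarray, threshold: int = 0):
    """Return coordinates and new values of the pixels that changed between two
    grayscale frames; unchanged pixels need not be stored because they can be
    copied from the previous frame when the next frame is rendered."""
    mask = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    return np.argwhere(mask), next_frame[mask]

def decode_delta(prev_frame: np.ndarray, coords: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Rebuild the next frame from the previous frame plus the stored changes."""
    frame = prev_frame.copy()
    frame[tuple(coords.T)] = values
    return frame
```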
In a video compression format such as MPEG-2, complete information frames are called Full Frames or I-frames (independent frames). These frames are independent of other frames and can therefore be decoded without referring to information in any other frame of the video. The main compression savings are made by converting the uncompressed video frames into dependent frames, which rely on information from an adjacent frame in order to be successfully decoded. Dependent frames which depend on preceding frames are called Predictive Frames or P-frames, and frames which depend on both preceding and following frames are known as B-frames.
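The dependency structure described above can be pictured with a minimal data model; the particular pattern of frame types shown below is illustrative only and is not prescribed by the text.

```python
from dataclasses import dataclass, field

@dataclass
class VideoFrame:
    index: int
    kind: str                                       # "I", "P" or "B"
    depends_on: list = field(default_factory=list)  # indices of reference frames

# Illustrative group of pictures: one independent frame followed by dependent frames.
gop = [
    VideoFrame(0, "I"),                      # decodable on its own
    VideoFrame(1, "B", depends_on=[0, 2]),   # needs preceding and following references
    VideoFrame(2, "P", depends_on=[0]),      # needs the preceding reference
    VideoFrame(3, "B", depends_on=[2, 4]),
    VideoFrame(4, "P", depends_on=[2]),
]
```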
Whilst use of I-frames, P-frames and B-frames provides valuable file size savings, temporal compression techniques can inconvenience the user's viewing experience. For example, a user may wish to skip to a specific position in the file and begin playback from that position instead of watching the entire video in order.
If an I-frame is located in the video file at the user's selected position, then playback can begin from that position. However, if an I-frame is not present at the desired location, then in most cases the video decoder will seek to the nearest preceding I-frame location. The user must then wait for playback to reach the desired segment of the video file.
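A sketch of this seek behaviour, using the illustrative scene-change frame numbers that appear later in the description, might look as follows; the decoder falls back to the nearest preceding I-frame when the requested frame is a dependent frame.

```python
import bisect

def seek_start_frame(i_frame_positions, requested_frame):
    """Return the frame from which decoding must actually start.

    If the requested frame is not an I-frame, fall back to the nearest
    preceding I-frame so that the dependent frames can be decoded in order.
    """
    pos = bisect.bisect_right(i_frame_positions, requested_frame) - 1
    return i_frame_positions[max(pos, 0)]

# Example: I-frames only at scene changes.
i_frames = [0, 56, 215, 394, 431, 457, 1499]
print(seek_start_frame(i_frames, 300))   # -> 215: the user waits while frames 215-299 play
```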
One known way to address the above problem is to insert more I-frames into the compressed video file. In addition to the I-frames located at scene switching points, I-frames are inserted at regular intervals, for example every second or every 20 frames, so that the granularity of the video segments is improved. However, the presence of more I-frames increases the file size of the video.
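Expressed as a sketch, the fixed-interval approach simply merges the scene-change positions with a regular grid of keyframe positions; the 20-frame interval is the example given above and the scene-change numbers are illustrative.

```python
def fixed_interval_keyframes(scene_changes, total_frames, interval=20):
    """Combine scene-change I-frames with I-frames every `interval` frames."""
    positions = set(scene_changes)
    positions.update(range(0, total_frames, interval))
    return sorted(positions)

print(fixed_interval_keyframes([0, 56], total_frames=100, interval=20))
# -> [0, 20, 40, 56, 60, 80]  (more I-frames, hence a larger file)
```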
The present invention addresses the above problems.
In one aspect the present invention provides a method of encoding media content into a sequence of independent data frames and dependent data frames, the method comprising: analysing the media content to determine where scene changes occur within the media content; generating encoding parameters defining the location of said scene changes; accessing data indicating semantically significant sections of the media content; and updating the encoding parameters so that independent data frames are present at locations indicated by the semantic data.
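A minimal sketch of the four steps of the method, assuming for illustration that the encoding parameters are just a sorted list of I-frame positions and that the semantic data is a list of frame numbers:

```python
def build_iframe_positions(scene_changes, semantic_positions):
    """Combine the two sources of I-frame locations.

    scene_changes      - frame numbers found by analysing the media content
    semantic_positions - frame numbers read from the semantic (narration) data
    The returned sorted list is the updated set of encoding parameters, so that
    an independent frame is produced at every scene change and at every
    semantically significant location.
    """
    return sorted(set(scene_changes) | set(semantic_positions))
```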
In another aspect, the present invention provides an apparatus for encoding media content into a sequence of independent data frames and dependent data frames, the apparatus comprising: means for analysing the visual content of the media content; a configuration data store indicating the location of scene changes in the media content; accessing means for accessing data indicating semantically significant sections of the media content; and means for updating the configuration data store to include full-frames at locations indicated by the semantic data.
In a further aspect, the present invention provides an encoded media file formed of a sequence of independent data frames and dependent data frames, the independent data frames being located at semantically significant parts of the media file.
Other preferred features are set out in the dependent claims.
Embodiments of the present invention will now be described with reference to the accompanying figures in which:
In the first embodiment, the encoding system processes uncompressed video files to generate corresponding compressed video files having I-frames located at scene changes within the video and P-frames or B-frames for other frames as is conventional. Additionally, the encoder uses semantically significant data such as narrative information to add further I-frames at positions within the video which are not scene changes but are narratively significant.
At step s1, the encoder 15 accesses the uncompressed video file 21. In step s3, the encoder 15 performs a first pass of the accessed video 21 to identify where scene changes occur. The locations of the scene changes within the video file 21 are stored in a configuration file 29 stored in the working memory 5. In this embodiment, the video encoder 15 stores the frame number of each frame where a scene change occurs. For example:
Frame 0;
Frame 56;
Frame 215;
Frame 394;
Frame 431;
Frame 457;
Frame 1499.
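The description does not specify how the first pass detects scene changes; a plausible sketch, assuming the frames can be read as grayscale NumPy arrays and using a simple mean-absolute-difference threshold (an assumed tuning value), produces and stores a frame list of the kind shown above.

```python
import numpy as np

def detect_scene_changes(frames, threshold=30.0):
    """First pass: return frame numbers where the picture changes abruptly.

    `frames` is an iterable of equally sized grayscale NumPy arrays; the
    threshold is an assumed tuning parameter, not a value from the text.
    """
    positions = [0]                      # frame 0 always starts a scene
    prev = None
    for number, frame in enumerate(frames):
        if prev is not None:
            diff = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
            if diff > threshold:
                positions.append(number)
        prev = frame
    return positions

def write_configuration(positions, path="scene_changes.cfg"):
    """Store the scene-change frame numbers, one per line, as in the listing above."""
    with open(path, "w") as cfg:
        cfg.writelines(f"Frame {p};\n" for p in positions)
```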
Returning to
As shown in
The narration data file 27, and the manner in which it is generated, will now be explained.
In this embodiment, the narration data file 27 is generated by the user who produces the uncompressed video input 21. The producer carries out a manual process to mark the start of segments of the video which may be of narrative interest to any end users who view the final video. Examples of points of narrative interest include the start of speech by a certain actor, the start of an action sequence, the start of a musical piece, and so on. The points of narrative interest are not limited to events in the audio track but also include visual events which do not cause a change of scene, for example a motion freeze or an actor walking into the scene.
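The format of the narration data file 27 is not fixed by the description; one plausible layout, shown here purely for illustration, pairs a frame number with a free-text description of the point of narrative interest, which the producer fills in by hand.

```python
# Hypothetical narration data file, one entry per line: "<frame number>: <description>"
EXAMPLE_NARRATION_FILE = """\
120: start of speech by lead actor
610: start of action sequence
905: musical piece begins
1320: actor walks into the scene
"""

def parse_narration(text):
    """Return (frame number, description) pairs for each point of narrative interest."""
    entries = []
    for line in text.splitlines():
        if line.strip():
            frame, description = line.split(":", 1)
            entries.append((int(frame), description.strip()))
    return entries
```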
In the first embodiment, a compressed video file corresponding to an input uncompressed video file is generated having additional I-frames at locations where the video producer has manually specified segments of particular interest. In this way, a user who later views the compressed video has the ability to seek to particularly interesting parts of the video.
In the first embodiment, the video encoder produces compressed MPEG-2 videos from an input uncompressed video file using a two-pass encoding scheme. In the second embodiment, the video encoder compresses the input video file using a single pass encoding scheme.
In the first and second embodiments, the video encoder processed uncompressed video input and produced compressed video data having I-frames placed in accordance with segments of interest as determined by the video producer as well as the conventional placement of I-frames based on scene changes.
In the third embodiment, the system allows I-frames representing points of narrative interest to be added to video files which are already compressed. This is useful in cases where a part of the video only becomes of narrative interest once it has been made available for a length of time.
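The description does not spell out how a dependent frame of an already compressed file is turned into an I-frame; conceptually, the encoder must decode forward from the nearest preceding I-frame to reconstruct the target picture and then re-encode that picture independently. The toy model below captures only that outline and does not represent a real codec operation.

```python
from dataclasses import dataclass

@dataclass
class CompressedFrame:
    index: int
    kind: str        # "I", "P" or "B"
    picture: object  # stands in for the picture data carried by the frame

def promote_to_iframe(frames, target):
    """Mark the frame at position `target` as independent and return the index
    of the I-frame from which a real codec would have to start decoding.

    In this toy model the list position equals the frame number and the
    decoded picture is assumed to be available, so only the frame type
    changes; a real implementation would decode frames start..target and
    re-encode the reconstructed picture independently.
    """
    start = max(f.index for f in frames if f.kind == "I" and f.index <= target)
    frames[target].kind = "I"
    return start
```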
In the above embodiments, the video encoder is arranged to produce compressed video files having I-frames located at scene transitions within the video, and also at locations specified in a narration file defined by the producer of the video, or any user who wishes to add I-frames to a video.
In many videos, when a scene change occurs, there will often be a slight time delay between the start of the new segment of the video and the start of any video content which is narratively significant. In later video editing tasks, for example merging separate videos, or extracts from a single video, into a composite video, it is desirable to filter out the narratively unimportant content.
In the fourth embodiment, the encoder is further operable to insert additional I-frames after either a scene change I-frame, or a semantic I-frame.
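The size of the delay between a scene change and the narratively significant content is not quantified in the description; the sketch below assumes a fixed, hypothetical offset in frames and inserts an extra I-frame that many frames after each existing I-frame, giving a clean cut point past the unimportant lead-in.

```python
def add_offset_iframes(i_frame_positions, offset, total_frames):
    """Insert an additional I-frame `offset` frames after each existing I-frame."""
    extra = {p + offset for p in i_frame_positions if p + offset < total_frames}
    return sorted(set(i_frame_positions) | extra)

# Example with a hypothetical 25-frame (one second) delay:
print(add_offset_iframes([0, 56, 215], offset=25, total_frames=400))
# -> [0, 25, 56, 81, 215, 240]
```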
In the first to fourth embodiments, the encoder is arranged to insert I-frames at specified locations in the video corresponding to narratively important video content, as set out in the narration data file 81.
In the fifth embodiment, in addition to inserting I-frames, the video encoder is arranged to emphasize the semantically important sections of an input video by changing the quality of the output video for frames following the inserted I-frame.
The physical and functional components of the encoding system in the fifth embodiment are similar to those of the previous embodiments, the only differences being in the narration data file 81 and the encoder.
The encoder receives information from the narration data file 81 and, in response, inserts I-frames and also increases the number of bits allocated to encoding (hereinafter referred to as the bit rate) for the sections of the video which are narratively important.
In some cases, the narratively important sections will coincide with sections which the video encoder would normally consider to require more bits. However, in other situations, for example where a particularly important speech is being delivered but the video background is not changing significantly, the video encoder will allocate a higher bit rate to the sections defined in the narration file. If there are constraints on the allowable bit rate for the video or on the file size, then the video encoder will allocate more bit rate to the narratively important sections and set a lower bit rate for other parts of the video.
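One way to picture this allocation is a per-frame bit budget that weights frames inside the narratively important sections more heavily and rescales the whole budget to respect the overall constraint; all numbers below are illustrative.

```python
def allocate_bitrate(total_frames, important_ranges, total_bits, boost=2.0):
    """Return a per-frame bit budget that favours narratively important sections.

    Important frames get `boost` times the weight of ordinary frames, and the
    budgets are scaled so that the overall file-size constraint is respected.
    """
    def important(frame):
        return any(start <= frame < end for start, end in important_ranges)

    weights = [boost if important(f) else 1.0 for f in range(total_frames)]
    scale = total_bits / sum(weights)
    return [w * scale for w in weights]

# Example: a 100-frame clip where frames 40-59 carry an important speech.
budget = allocate_bitrate(100, [(40, 60)], total_bits=1_000_000)
print(budget[0], budget[50])   # ordinary frames receive fewer bits than important ones
```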
In the above embodiments, the video encoder generated MPEG-2 video files. It will be appreciated that any compression format performing temporal frame compression could be used, for example WMV or H.264.
In the first embodiment, the encoder uses a two-pass encoding scheme. In an alternative, a multi-pass encoding scheme is used, for example a three-pass or four-pass encoding scheme.
In the embodiments, the narration file is manually generated by a user of the system. In an alternative, the narration file is generated without user action: an audio processor analyses the audio stream within the video file to determine when speech occurs and populates the narration file accordingly.
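The description says only that an audio processor detects when speech occurs; a minimal sketch, assuming the audio is available as a mono floating-point NumPy array and using a crude short-term energy threshold as a stand-in for a real speech detector, could populate the narration file as follows.

```python
import numpy as np

def detect_speech_starts(audio, sample_rate, frame_rate=25.0, energy_threshold=0.01):
    """Return video frame numbers where speech (approximated by audio energy) begins.

    The energy threshold is an assumed tuning value; a production system would
    use a proper voice-activity detector rather than raw energy.
    """
    window = int(sample_rate / frame_rate)            # audio samples per video frame
    starts, speaking = [], False
    for i in range(0, len(audio) - window, window):
        energy = float(np.mean(audio[i:i + window] ** 2))
        if energy > energy_threshold and not speaking:
            starts.append(i // window)                # video frame where speech begins
            speaking = True
        elif energy <= energy_threshold:
            speaking = False
    return starts

def write_narration(frame_numbers, path="narration.txt"):
    """Populate the narration file with one entry per detected speech start."""
    with open(path, "w") as out:
        out.writelines(f"{n}: speech begins\n" for n in frame_numbers)
```

In practice a more robust voice-activity detector or speech recogniser would replace the simple energy threshold, but the overall flow of populating the narration file from the audio analysis is the same.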