The present invention relates to the use of video and, in particular, to the insertion of extra or additional content into video.
The field of multimedia communications has seen tremendous growth over the past decade, leading to vast improvements that allow real-time computer-aided digital effects to be introduced into video presentations. For example, methods have been developed for the purpose of inserting advertising image/video overlays into selected frames of a video broadcast. The inserted advertisements are implanted in a perspective-preserving manner that appears to be part of the original video scene to a viewer.
A typical application for such inserted advertisements is seen in the broadcast videos of sporting events. Because such events are often played at a stadium, which is a known and predictable playing environment, there will be known regions in the viewable background of a camera view that is capturing the event from a fixed position. Such regions include advertising hoardings, terraces, spectator stands, etc.
Semi-automated systems exist which make use of the above fact to determine the information needed to implant advertisements into selected background regions of the video. This may be provided via a perspective-preserving mapping of the physical ground model to the video image co-ordinates. Advertisers then buy space in a video to insert their advertisements into the selected image regions. Alternatively, one or more authoring stations are used to interact with the video feed manually to designate image regions useful for virtual advertisements.
U.S. Pat. No. 5,808,695, issued on 15 Sep. 1998 to Rosser et al. and entitled “Method of Tracking Scene Motion for Live Video Insertion Systems”, describes a method for tracking motion from image field to image field in a sequence of broadcast video images, for the purpose of inserting indicia. Static regions in the arena are manually defined and, over the video presentation, these are tracked to maintain their corresponding image co-ordinates for realistic insertion. Intensive manual calibration is needed to identify these target regions, as they need to be visually differentiable so as to facilitate motion tracking. There is also no way to allow the insertion to be occluded by moving images from the original video content, thereby rendering the insertion highly intrusive to the end viewers.
U.S. Pat. No. 5,731,846, issued on 24 Mar. 1998 to Kreitman et al. and entitled “Method and System for Perspectively Distorting an Image and Implanting Same into a Video Stream” describes a method and apparatus for image implantation that incorporates a 4-colour Look-Up-Table (LUT) to capture different objects of interest in the video scene. By selecting the target region to be a significant part of the playing field (inner court), the inserted image appears to be intruding into the viewing space of the end viewers.
U.S. Pat. No. 6,292,227, issued on 18 Sep. 2001 to Wilf et al. and entitled “Method and Apparatus for Automatic Electronic Replacement of Billboards in a Video Image” describes apparatus to replace an advertising hoarding image in a video image automatically. Using an elaborate calibration set-up that relies on camera sensor hardware, the image locations of the hoarding are recorded and a chroma colour surface is manually specified. During live camera panning, the hoarding image locations are retrieved and replaced by a virtual advertisement using the chroma-keying technique.
Known systems need intensive labour to identify suitable target regions for advertisement insertion. Once identified, these regions are fixed and no other new regions are allowed. Hoarding positions are identified because those are the regions where viewers would most naturally expect to find advertising information. Perspective maps are also used to attempt realistic advertisement implantation. These efforts collectively contribute to elaborate manual calibration.
There is a conflicting requirement between the continual push for greater advertising effectiveness amongst advertisers, and the viewing pleasure of the end-viewers. Clearly, realistic virtual advertisement implants at suitable locations (such as advertising hoardings) are compromises enabled by current 3D graphics technology. However, there are only so many hoardings within the video image frames. As a result, advertisers push for more space for advertisement implantation.
According to one aspect of the present invention, there is provided a method of inserting additional content into a video segment of a video stream, the video segment comprising a series of video frames. The method comprises: receiving the video segment, determining a frame content, determining suitability for insertion and inserting the additional content. Determining a frame content is determining the frame content of at least one frame of the video segment. Determining the suitability of insertion of additional content is based on the determined frame content. Inserting the additional content is inserting the additional content into the frames of the video segment depending on the determined suitability.
According to another aspect of the present invention, there is provided a method of inserting further content into a video segment of a video stream, the video segment comprising a series of video frames. The method comprises receiving the video stream, detecting static spatial regions within the video stream and inserting the further content into the detected static spatial regions.
According to a third aspect of the present invention, there is provided video integration apparatus operable according to the method of either above aspect.
According to a fourth aspect of the present invention, there is provided video integration apparatus for inserting additional content into a video segment of a video stream, the video segment comprising a series of video frames. The apparatus comprises means for receiving the video segment, means for determining the frame content, means for determining at least one first measure and means for inserting the additional content. The means for determining the frame content determines the frame content of at least one frame of the video segment. The means for determining at least one first measure determines at least one first measure for the at least one frame indicative of the suitability of insertion of additional content, based on the determined frame content. The means for inserting inserts the additional content into the frames of the video segment depending on the determined at least one first measure.
According to a fifth aspect of the present invention, there is provided video integration apparatus for inserting further content into a video segment of a video stream, the video segment comprising a series of video frames. The apparatus comprises means for receiving the video stream, means for detecting static spatial regions within the video stream and means for inserting the further content into the detected static spatial regions.
According to a sixth aspect of the present invention, there is provided apparatus according to the fourth or fifth aspects operable according to the method of the first or second aspect.
According to a seventh aspect of the present invention, there is provided a computer program product for inserting additional content into a video segment of a video stream, the video segment comprising a series of video frames. The computer program product comprises a computer usable medium and a computer readable program code means embodied in the computer usable medium and for operating according to the method of the first or second aspect.
According to an eighth aspect of the present invention, there is provided a computer program product for inserting additional content into a video segment of a video stream, the video segment comprising a series of video frames. The computer program product comprises a computer usable medium and a computer readable program code means embodied in the computer usable medium. When the computer readable program code means is downloaded onto a computer, it renders the computer into apparatus as according to any one of the third to the sixth aspects.
Using the above aspects, there can be provided methods and apparatus that insert virtual advertisements or other virtual contents into a sequence of frames of a video presentation by performing real-time content-based video frame processing to identify suitable locations in the video for implantation. Such locations correspond to both the temporal segments within the video presentation and the regions within an image frame that are commonly considered to be of lesser relevance to the viewers of the video presentation. This invention presents a method and apparatus that allows a non-intrusive means to incorporate additional virtual content into a video presentation, facilitating an additional channel of communications to enhance greater video interactivity.
The present invention is further described by way of non-limitative example, with reference to the accompanying drawings, in which:—
Embodiments of the present invention are able to provide content-based video analysis that is capable of tracking the progress of a video presentation, and assigning a first relevance-to-viewer measure (FRVM) to temporal segments (frames or frame sequences) of the video and finding spatial segments (regions) within individual frames in the video that are suitable for insertion.
Using video of association football (soccer) as an example, referred to hereafter simply as football, it would not be unreasonable to generalise that viewers are focused on the immediate area around the ball. The relevance of the content to the viewer decreases for regions of the image the further they lie, concentrically, from the ball. Likewise, it would not be unreasonable to judge that a scene where the camera view is focused on the crowd, which is usually of no relevance to the game, is of lesser relevance to the viewer, as would be a player-substitution scene. Compared to scenes where there is high global motion, where there is player build-up or where the play is closer to the goal-line, the crowd scenes and player-substitution scenes are of lesser importance to the play.
Embodiments of the invention provide a system, method and software for inserting content into video presentations. For ease of terminology, the term “system” alone will generally be used. However, no specific limitation is intended to exclude methods, software or other ways of embodying or using the invention. The system determines an appropriate target region for content implantation that is relatively non-intrusive to the end viewers. These target regions may appear at any arbitrary location in the image, as determined by the system to be sufficiently non-intrusive.
The relevant portions of the system 10 as appear in
One or more cameras 20 are set up at the venue site 12. In a typical configuration for filming a sporting event such as a football match (as is used for the sake of example throughout much of this description), broadcast cameras are mounted at several peripheral view-points surrounding the soccer field. For instance, this configuration usually minimally involves a camera located at a position that overlooks the centre field line, providing a grand-stand view of the field. During the course of play, this camera pans, tilts and zooms from this central position. There may also be cameras mounted in the corners or closer to the field, along the sides and ends, in order to capture the game action from a closer view. The varied video feeds from the cameras 20 are sent to the central broadcast studio 14, where the camera view to be broadcast is selected, typically by a broadcast director. The selected video is then sent to the local distribution point 16, that may be geographically spaced from the broadcast studio 14 and the venue 12, for instance in a different city or even a different country.
In the local broadcast distributor 16, additional video processing is performed to insert content (typically advertisements) that may usefully be relevant to the local audience. Relevant software and systems sit in a video integration apparatus within the local broadcast distributor 16, and select suitable target regions for content insertion. The final video is then sent to the viewer's site 18, for viewing by way of a television set, computer monitor or other display.
Most of the features described in detail herein take place within the video integration apparatus in the local broadcast distributor 16 in this embodiment. Whilst the video integration apparatus is described here as being within the local broadcast distributor 16, it may instead be within the broadcast studio 14 or elsewhere as required. The local broadcast distributor 16 may be a local broadcaster or even an internet service provider.
The video signal stream is received (step S102) by the apparatus. As the original video signal stream is received, the processing apparatus performs segmentation (step S104) to retrieve homogenous video segments, which are homogenous both temporally and spatially. The homogenous video segments correspond to what are commonly called “shots”. Each shot is a collection of frames from a continuous feed from the same camera. For football, the shot length might typically be around 5 or 6 seconds and is unlikely to be less than 1 second long. The system determines the suitability of separate video segments for content insertion and identifies (step S106) those segments that are suitable. This process of identifying such segments is, in effect, answering the question of “WHEN TO INSERT”. For those video segments which are suitable for content insertion, the system also determines the suitability of spatial regions within a video frame for content insertion and identifies (step S108) those regions that are suitable. The process of identifying such regions is, in effect, answering the question of “WHERE TO INSERT”. Content selection and insertion (step S110) then occurs in those regions where it is found suitable.
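The shot segmentation of step S104 can be sketched in outline as follows. This is a minimal illustrative sketch only, not the claimed implementation; the function names and the boundary-detection callback are assumptions introduced for illustration.

```python
def segment_into_shots(frames, is_shot_boundary):
    """Group a sequence of frames into shots (homogenous video
    segments), splitting wherever a shot boundary is detected
    between consecutive frames (cf. step S104)."""
    shots, current = [], []
    for i, frame in enumerate(frames):
        # Start a new shot whenever the boundary predicate fires.
        if current and is_shot_boundary(frames[i - 1], frame):
            shots.append(current)
            current = []
        current.append(frame)
    if current:
        shots.append(current)
    return shots
```

In practice the boundary predicate would be driven by the content-based measures described later (for example, histogram distance between successive frames), rather than by the toy comparison used here.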
The frames and their associated image attributes generated at the frame-level processing module 22 proceed to a first-in first-out (FIFO) buffer 24, where they undergo a slight delay as they are processed for insertion, before being broadcast. A buffer-level processing module (whether a hardware or software processor, unitary or non-unitary) 26 receives attribute records for the frames in the buffer 24, generates and updates new attributes based on the input attributes, sending the new records to the buffer 24, and makes the insertions into the selected frames before they leave the buffer 24.
The division in processing between frame-level processing and buffer-level processing is generally between raw data processing vs. meta data processing. The buffer-level processing is more robust, as it tends to rely on statistical aggregation.
The buffer 24 provides video context to aid in insertion decisions. A relevance-to-viewer measure, the FRVM, is determined within the buffer-level processing module 26 from the attribute records and context. The buffer-level processing module 26 is invoked for each frame that enters the buffer 24 and it conducts the relevant processing on each frame within one frame time. Insertion decisions can be made on a frame-by-frame basis, for a whole segment on a sliding-window basis, or for a shot, in which case insertion is made for all the frames within the segment and no further processing of the individual frames is necessary.
The determination processes for determining “When” and “Where” to insert content (steps S106 and S108) are described now in more detail with reference to the flowchart of
As a result of segmentation (step S104 of
As the video frames are received by the video integration apparatus, they are analysed for their feasibility for content insertion. The decision process is augmented by a parameter data set, which includes the key decision parameters and the thresholds needed for the decisions.
The parameter set is derived via an off-line training process, using a training video presentation of the same type of subject matter (e.g. a football game for training use of the system on a football game, a rugby game for training use of the system on a rugby game, a parade for training use of the system on a parade). Segmentation and relevance scores in the training video are provided by a person viewing the video. Features are extracted from frames within the training video and, based on these and the segmentation and relevance scores, the system learns statistics such as video segment duration, percentage of usable video segments, etc., using a relevant learning algorithm. This data is consolidated into the parameter data set to be used during actual operation.
For instance, the parameter set may specify a certain threshold for the colour statistics of the playing field. This is then used by the system to segment the video frame into regions of playing field and non-playing field. This is a useful first step in determining active play zones within the video frame. It is commonly accepted that non-active play zones are not the focal point for end viewers and can therefore be attributed a lesser relevance measure. While the system relies on the accuracy of the parameter set trained via the off-line process, it also performs its own calibration with respect to content-based statistics gathered from the received video frames of the actual video into which content is to be inserted. During this bootstrapping or initialisation step, no content is inserted. The time duration for this bootstrap is not long and, considering the entire duration of the video presentation, represents merely a fraction of opportunity time lost in content viewing. The calibration can be based on comparison with previous games, for instance at or before the whistle blowing, when viewers tend to take a more global interest in what is on screen.
Whenever a suitable region inside a frame, within a video segment, is designated for content insertion, content is implanted into that region, and typically stays exposed for a few seconds. The system determines the exposure time duration for the inserted content based on information from the off-line learning process. Successive video frames of a homogenous video segment remain visually homogenous. Thus it is highly likely that the target region, if it is deemed to be non-intrusive in one frame, and therefore suitable for content insertion, would stay the same for the rest of the video segment and therefore for the entire duration of the few seconds of inserted content exposure. For the same reason, if no suitable insertion region can be found, the whole video segment can be rejected.
The series of computation steps in
There may also be a question of “WHAT TO INSERT”, if there is more than one possibility, and this may depend upon the target regions. The video integration apparatus of this embodiment also includes selection systems for determining insertion content suitable for the geometric sizes and/or locations of the designated target regions. Depending on the geometrical property of the target regions so determined by the system, a suitable form of content might then be implanted. For instance, if a small target region is selected, then a graphic logo might be inserted. If an entire horizontal region is deemed suitable by the system, then an animated text caption might be inserted. If a sizeable target region is selected by the system, a scaled-down video insert may be used. Also different regions of the screen may attract different advertising fees and therefore content may be selected based on the importance of the advertisement or level of fees paid.
In
Table 1 lists various video segment categories and examples of FRVMs that might be applied to them.
The values from the table are used by the system in allocating FRVMs and can be adjusted on-site by an operator, even during a broadcast. One effect of modifying the FRVMs in the respective categories is to modify the rate of occurrence of content insertion. For example, if the operator were to set all the FRVMs in Table 1 to zero, denoting a low relevance-to-viewer measure for all types of video segments, then during presentation the system will find more instances of video segments with a FRVM passing the threshold comparison, resulting in more instances of content insertion. This might appeal to a broadcaster when game time is running out but he is still required to display more advertising content (for instance if a contract requires that an advertisement be displayed a minimum number of times or for a minimum total length of time). By changing the FRVM table directly, he changes the rate of occurrence of virtual content insertion. The values in Table 1 may also be used as a way of distinguishing free-to-view broadcasting (high FRVM values) from pay-to-view broadcasting (low FRVM values) of the same event. Different values in Table 1 would be used for the feeds of the same broadcast to different broadcast channels.
The decision on whether video segments are suitable for content insertion is determined by comparing the FRVM of one frame against a defined threshold. For example, insertion may only be allowed where the FRVM is 6 or lower. The threshold value may also or instead be changed as a way of changing the amount of advertising that appears. When a video segment has thus been deemed to be suitable for content insertion, one or more video frames are analysed to detect suitable spatial regions for the actual content insertion.
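The threshold comparison just described can be sketched as follows. The table contents and category names are hypothetical examples (the actual values come from Table 1 and are operator-adjustable); the default threshold of 6 follows the "6 or lower" example in the text.

```python
# Hypothetical excerpt of an FRVM table (cf. Table 1); real values are
# assigned per video segment category and may be tuned by an operator.
FRVM_TABLE = {
    "goal_mouth_action": 9,   # utmost relevance: never insert
    "player_close_up": 3,
    "crowd_view": 2,
}

def segment_is_insertable(category, threshold=6, table=FRVM_TABLE):
    """A video segment is deemed suitable for content insertion when
    its FRVM does not exceed the threshold. Unknown categories are
    conservatively treated as maximally relevant (FRVM 10)."""
    return table.get(category, 10) <= threshold
```

Raising or lowering `threshold`, or editing the table values directly, changes the rate of occurrence of content insertion, as described above.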
Determining Suitable Video Frames for Content Insertion (WHEN TO INSERT?) [Step S106 of
In determining the feasibility of the current video segment for content insertion, a principal criterion is the relevance measure of the current frame with respect to the current thematic progress of the original content. To achieve this, the system uses content-based video processing techniques that are well-known to those skilled in the field. Such well-known techniques include those described in: “An Overview of Multi-modal Techniques for the Characterization of Sport Programmes”, N. Adami, R. Leonardi, P. Migliorati, Proc. SPIE-VCIP'03, pp. 1296-1306, 8-11 July 2003, Lugano, Switzerland; and “Applications of Video Content Analysis and Retrieval”, N. Dimitrova, H-J Zhang, B. Shahraray, I. Sezan, T. Huang, A. Zakhor, IEEE Multimedia, Vol. 9, No. 3, July-September 2002, pp. 42-55.
A Hough-Transform based line detection technique is used to detect major line orientations (step S142). An RGB spatial colour histogram is determined, to work out whether a frame represents a shot change and also to determine field and non-field regions (step S144). Global motion is determined between successive frames (step S146), and also on single frames based on encoded motion vectors. Audio analysis techniques are used to track the audio pitch and excitement level of the commentator, based on successive frames and segments (step S148). The frame is classified as a field/non-field frame (step S150). A least square fitting is determined, to detect the presence of an ellipse (step S152). There may be other operations as well or instead, depending on the event being broadcast.
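As an illustration of one of these operations, the shot-change side of step S144 can be sketched with a coarse RGB histogram comparison. This is a simplified sketch, not the invention's implementation; the bin count and the L1-distance threshold are assumptions that would in practice come from the trained parameter data set.

```python
import numpy as np

def rgb_histogram(frame, bins=8):
    """Coarse per-channel RGB histogram of a (H, W, 3) uint8 frame,
    normalised so the concatenated histogram sums to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def is_shot_change(prev_frame, curr_frame, threshold=0.5, bins=8):
    """Flag a shot change when the L1 distance between the histograms
    of successive frames exceeds a threshold (value illustrative)."""
    d = np.abs(rgb_histogram(prev_frame, bins)
               - rgb_histogram(curr_frame, bins)).sum()
    return d > threshold
```

The same histogram, compared against trained field-colour statistics rather than against the previous frame, can also support the field/non-field region determination mentioned for step S144.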
Signals may also be provided from the cameras, either supplied separately or coded onto the frames, indicating their current pan and tilt angles and zoom. As these parameters define what is on the screen in terms of the part of the field and the stands, they can be very useful in helping the system identify what is in a frame.
The outputs of the various operations are analysed together to determine both segmentation and the current video segment category and the game's thematic progress (step S154). Based on the current video segment category and the game's thematic progress, the system allocates a FRVM, using the available values for each category of video segment from Table 1.
For example, where the Hough-Transform based line detection technique indicates relevant line orientations and the spatial colour histogram indicates relevant field and non-field regions, this may indicate the presence of a goal mouth. If this is combined with commentator excitement, the system may deem that goal mouth action is in progress. Such a segment of video is of the utmost relevance to the end viewers, and the system would give the segment a high FRVM (e.g. 9 or 10), thereby refraining from content insertion. The Hough Transform and elliptical least square fitting, each of which is a well-understood, state-of-the-art technique in content-based image analysis, are also very useful for the specific determination of mid-field frames.
Assuming that a previous video segment was of goal mouth action, the system might next, for example, detect that the field of play has changed, via a combination of the content based image analysis techniques. The intensity in the audio stream has calmed, global camera motion has slowed, and the camera view is now focused on a non-field view, for example that of a player close-up (e.g. FRVM<=3). The system then deems this to be an opportune time for content insertion.
Various methods are now described, which relate to some of the processes that may be applied in generating FRVMs. The embodiments are not necessarily limited by way of having to have any or all of these or only having these methods. Other techniques may be used as well or instead.
Once the system has determined where shots begin and end, shot attributes are determined on a shot by shot basis within the buffer. The buffer-level processing module collates images within a shot and computes the shot-level attributes. The sequence of shot attributes that is generated represents a compact and abstract view of the video progression. These can be used as inputs to a dynamic learning model for play break detection.
For incoming frames, the buffer-level processor determines an average of the global motion for the shot so far (step S226), an average of the dominant colour (averaging R, G B) for the shot so far (step S228), as well as an average of the audio energy for the shot so far (step S230). The three new averages are used to update the shot attributes for the current shot, in this example becoming those attributes (step S232). If the current frame is the last frame in the shot (step S234), the current shot attributes are quantized into discrete attribute values (step S236) before being written to the shot attribute record for the current shot. If the current frame is not the last frame in the shot (step S234), the next frame is used to update the shot attribute values.
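The incremental averaging and the final quantization of steps S226-S236 can be sketched as follows. The quantization levels shown are an assumption for illustration; in the described system they would be chosen to suit the discrete-observation HMM used downstream.

```python
def update_running_mean(mean, count, new_value):
    """Incremental mean of a shot attribute (global motion, dominant
    colour channel, or audio energy) over the shot so far
    (cf. steps S226-S232)."""
    return (mean * count + new_value) / (count + 1)

def quantize(value, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Map a continuous shot attribute to the nearest discrete level
    before it is written to the shot attribute record (step S236).
    The level set is illustrative."""
    return min(levels, key=lambda lv: abs(lv - value))
```

Updating the mean incrementally means each incoming frame is handled in constant time, consistent with the requirement that the buffer-level processing for a frame completes within one frame time.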
The play break detection process described with reference to
In an alternative embodiment, a shorter buffer length is possible, using a continuous HMM, without quantization. Shots are limited in length to around 3 seconds; the HMM takes features from every third frame in the buffer and, on the determination of a play break, sets the FRVM for every frame in the buffer at the time as if it were a play break. Disadvantages of such an approach include limiting the shot lengths and the fact that the HMM requires a larger training set.
It will be readily apparent that there may be more or fewer steps of differing order than are illustrated here without departing from the invention. For example, in the field/non-field classification step S150 in
If it is determined that a frame is a field view, then the image attributes for the frame are updated to reflect this. Additionally the image attributes may be updated with further image attributes for use in determining if the current frame is of mid-field play. The attributes used to determine mid-field play are the presence of a vertical field line, with co-ordinates, global motion and the presence of an elliptical field mark.
Other field view shots can be merged into sequences in a similar manner. However, if the views are mid-field, there is a lower FRVM than for other sequences of field views.
Audio can also be useful for determining a FRVM.
Sometimes, a single frame or shot may have various FRVM values associated with or generated for it. The FRVM that applies depends on the precedence of the various determinations that have been made in connexion with the shot. Thus a play break determination will take precedence over an image that, during the normal course of play (such as around the goal), might be considered very relevant.
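This precedence rule can be sketched as follows. The precedence order and the category names are hypothetical, introduced only to illustrate the idea that a play-break determination outranks the others.

```python
# Hypothetical precedence order: determinations earlier in the tuple
# override later ones when several FRVMs exist for the same shot.
PRECEDENCE = ("play_break", "goal_mouth_action", "mid_field_play")

def effective_frvm(candidates, precedence=PRECEDENCE):
    """Resolve multiple candidate FRVMs for one frame or shot:
    `candidates` maps determination name -> FRVM, and the FRVM of
    the highest-precedence determination present is the one that
    applies."""
    for name in precedence:
        if name in candidates:
            return candidates[name]
    raise ValueError("no recognised determination for this shot")
```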
Determining Suitable Spatial Regions within a Video Frame for Content Insertion (WHERE TO INSERT?) [Step S108 of
After a video segment has been determined to be suitable for content insertion, the system needs to know where to implant the new content (if anywhere). This involves identifying spatial regions within the video frame positioned such that, when new content is implanted therein, it will cause minimal (or acceptable) visual disruption to the end-viewer. This is achieved by segmenting the video frame into homogenous spatial regions, and inserting content into spatial regions considered to have a low RRVM, for instance lower than a pre-defined threshold.
The above description indicates that the largest blob of colour is chosen. This often depends on how the colour of the image is defined. In a football game the main colour is green. Thus, the process may simply define each portion as green or non-green. Further, the colour of the region that is selected may be important. For some types of insertion, content may only be intended for a particular region: pitch or non-pitch. For pitch insertion, it is only the size of the green areas that is important. For crowd insertion, it is only the size of the non-green areas that is important.
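The green/non-green classification can be sketched as below. The dominance margin is an assumption; in the described system the colour threshold would come from the trained parameter set and the on-line calibration.

```python
import numpy as np

def green_mask(frame, margin=30):
    """Classify each pixel of a (H, W, 3) RGB frame as 'green' when
    the G channel dominates both R and B by a margin (the margin
    value is an illustrative assumption)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (g > r + margin) & (g > b + margin)

def candidate_area(frame, insert_on_pitch=True):
    """Size of the candidate target area: the green pixel count for
    pitch insertion, the non-green pixel count for crowd insertion."""
    m = green_mask(frame)
    return int(m.sum() if insert_on_pitch else (~m).sum())
```

A full implementation would additionally run connected-component analysis on the mask to pick the single largest blob, as the text describes.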
In a preferred embodiment of the present invention, the system identifies static, unchanging regions in the video frames that are likely to correspond to a static TV logo or a score/time bar. Such data necessarily occludes the original content in order to provide a minimal set of alternative information, which may not be appreciated by most viewers. In particular, the implantation of a static TV logo is a form of visible watermark that broadcasters typically use for media ownership and identification purposes. However, such information pertains to the workings of the broadcast business and in no way enhances the value of the video to end-viewers. Many people find such logos annoying and obstructive.
Detecting the locations of such static artificial images that are already overlaid on the video presentations and using these as alternative target regions for content insertion can be considered acceptable practice as far as the viewers are concerned, without infringing on the already limited viewing space of the video. The system attempts to locate such regions and others of low relevance to the thematic content of the video presentation. The system deems these regions to be non-intrusive to the end viewers, and therefore deems them suitable candidate target regions for content insertion.
The homogenous region computation for insertion in this particular process is implemented as a separate independent processing thread which accesses the FIFO buffer via critical sections and semaphores. The computation time is limited to the duration that the first image (within the FRVM sequence) is kept within the buffer before leaving the buffer for broadcast. The entire computation is abandoned if no suitable length sequence of static regions is found before the beginning of the sequence leaves the buffer, and no image insertion will be made. Otherwise, the new image is inserted into the same static region of every frame within the current FRVM sequence, after which, in this embodiment, these same frames are processed no further for insertion.
Every pixel that is unchanged over the last X frames (that are being checked, rather than necessarily X contiguous frames) is deemed to belong to a static region. In this case X is a number that is deemed suitable to decide whether a region is static. It is selected based on how long one would expect a pixel to stay the same for a non-static region and the gap between successive frames used for this purpose. For example with a time lag of 5 seconds between frames, X might be 6 (total time 30 seconds). In the case of an on-screen clock, the clock frame may stay fixed, but the clock value itself changes. This may still be deemed static based on an averaging (gap fill) determination for the interior of the clock frame.
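The static-pixel test over the X sampled frames can be sketched as follows. The per-channel tolerance is an assumption added to absorb compression noise; the text itself only requires that the pixel be unchanged.

```python
import numpy as np

def static_mask(sampled_frames, tolerance=2):
    """Given X frames sampled at a time lag (e.g. 5 seconds apart,
    X = 6 in the text's example), return a boolean (H, W) mask that
    is True where every colour channel stays within `tolerance`
    across all samples; such pixels are deemed to belong to a
    static region. `tolerance` is an illustrative assumption."""
    stack = np.stack(sampled_frames).astype(int)  # (X, H, W, 3)
    spread = stack.max(axis=0) - stack.min(axis=0)
    return (spread <= tolerance).all(axis=-1)
```

The gap-fill treatment of an on-screen clock interior, described above, would be layered on top of this mask by filling small holes enclosed by a static border.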
Each pixel is continually or regularly analysed to determine if it changes, to keep its static-status registration current. The reason is that these static logos may be taken off at different segments of the video presentation and may appear again at a later time. A different static logo may also appear, at a different location. Hence, the system maintains the most current set of locations where static artificial images are present in the video frames.
Based on the updated image attributes, the frame stream is segmented into continuous sequences of mid-field frames (S370) with an FRVM below a threshold. A determination is made as to whether the current sequence is long enough for content insertion (e.g. at least around two seconds) (step S372). If the sequence is not long enough, the next sequence is selected at step S370. If the sequence is long enough, then for each frame, the X-ordinate of the mid-field line becomes the X-ordinate of the Insertion Region (IR) (step S374). For current frame i, the first field line (FLi) is found (step S376). The determination of the X-ordinate of the IR and the first field line (FLi) is completed for each frame of the sequence (steps S378, S380). A determination is made as to whether the change in field line position from frame to frame is smooth, that is, that there is not a big FL variance (step S382). If the change is not smooth (the variance is big), there is no insertion into the current sequence based on mid-field play dynamic insertion (step S384). If the change is smooth (the variance is not big), then for each frame i the Y-ordinate of the IR becomes FLi (step S386). The relevant image is then inserted into the IR of the frame (step S388).
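The decision logic of steps S370 to S388 can be sketched as below. This is a hypothetical illustration: the frame rate, the minimum duration, and the variance threshold are assumed values, and the per-frame mid-field line x-positions and first-field-line y-positions are taken as already detected.

```python
FPS = 25                  # assumed broadcast frame rate
MIN_FRAMES = 2 * FPS      # "at least around two seconds" (step S372)
MAX_FL_VARIANCE = 25.0    # assumed smoothness threshold on FL y (step S382)

def insertion_regions(midfield_x, field_line_y):
    """Return per-frame (x, y) Insertion Region ordinates, or None when the
    sequence is rejected (steps S372 / S384)."""
    if len(midfield_x) < MIN_FRAMES:
        return None  # sequence too short for insertion (S372)
    mean = sum(field_line_y) / len(field_line_y)
    variance = sum((y - mean) ** 2 for y in field_line_y) / len(field_line_y)
    if variance > MAX_FL_VARIANCE:
        return None  # field line jumps about: no mid-field insertion (S384)
    # X-ordinate from the mid-field line (S374), Y-ordinate from FLi (S386)
    return list(zip(midfield_x, field_line_y))
```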
Step S372, determining whether the sequence is long enough, is not necessary where the frames are only given the attribute of mid-field play frames if the sequence is long enough, as happens in the process illustrated in
In the embodiment of
In the flowchart of
The frame stream is segmented (step S420) into continuous sequences of frames with an FRVM below a certain threshold, each sequence being no longer than the buffer length. Within these frames the goal mouth is detected (step S422) (based on field/non-field determination, line determination, etc.). If there is any frame where the detected position of the goal mouth appears to have jumped relative to its position in the surrounding frames, this suggests an aberration, and the frame is termed an "outlier". Such outlier frames are treated as if the goal mouth was not detected within them, and their detected positions are removed from the list of positions (step S424). Within the current sequence, gaps separating series of frames showing the goal mouth are detected (step S426), a gap, for example, being 3 or more frames where the goal mouth is not detected (or is treated as not having been detected). Of the two or more series of frames separated by a detected gap, the longest series of frames showing the goal mouth is found (step S428) and a determination is made of whether this longest series is long enough for insertion (e.g. at least around 2 seconds long) (step S430). If the series is not long enough, the whole current sequence is abandoned for the purposes of goal mouth insertion (step S432). However, if that series is long enough, interpolation of the co-ordinates of the goal mouth is performed for any frames in that series where the goal mouth was not detected (or was detected but treated otherwise) (step S434). An Insertion Region is generated, the base of which is aligned with the top of the detected goal mouth, and the insert is inserted in this (moving) region of the image for every frame of the longest series (step S436).
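The gap-splitting, longest-series, and interpolation steps (S426, S428, S434) can be sketched as follows. This is an illustrative sketch under assumed names and thresholds: `positions` holds one entry per frame, either an (x, y) goal-mouth position or `None` where detection failed (or was discarded as an outlier at step S424).

```python
GAP = 3        # 3 or more consecutive misses split the sequence (S426)
MIN_LEN = 50   # "around 2 seconds" at an assumed 25 fps (S430)

def longest_series(positions):
    """Split on gaps of >= GAP misses; return (start, end) frame indices of
    the longest run in which the goal mouth is detected (S428)."""
    runs, start, misses = [], None, 0
    for i, p in enumerate(positions):
        if p is not None:
            if start is None:
                start = i
            misses, end = 0, i
        elif start is not None:
            misses += 1
            if misses >= GAP:
                runs.append((start, end))
                start = None
    if start is not None:
        runs.append((start, end))
    return max(runs, key=lambda r: r[1] - r[0], default=None)

def interpolate(positions, start, end):
    """Linearly fill undetected (None) positions within the series (S434)."""
    out = list(positions[start:end + 1])
    known = [i for i, p in enumerate(out) if p is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = tuple(pa + t * (pb - pa)
                           for pa, pb in zip(out[a], out[b]))
    return out
```

With the longest series identified and its missing positions interpolated, the Insertion Region can then be placed above the goal mouth frame by frame (S436).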
The exemplary processes described with reference to
In the above description, various steps are performed in different flowcharts (e.g. computing global motion in
The present invention can be used with multimedia communications, video editing, and interactive multimedia applications. Embodiments of the invention allow innovation in methods and apparatus for implanting content such as advertisements into selected frame-sequences of a video presentation. Usually the insert will be an advertisement. However, it may be other material if desired, for instance news headlines.
The above described system can be used to perform virtual advertisement implantation in a realistic way in order not to disrupt the viewing experience or to disrupt it only minimally. For instance, the implanted advertisement should not obstruct the view of the player possessing the ball during a football match.
Embodiments of the invention are able to implant advertisements into a scene in a fashion that still provides a reasonably realistic view to the end viewers, so that the advertisements may be seen as appearing to be part of the scene. Once the target regions for implant are selected, the advertisements may be selectively chosen for insertion. Audiences watching the same video broadcast in different geographical regions may then see different advertisements, advertising businesses and products that are relevant to the local context.
Embodiments include an automatic system for insertion of content into a video presentation. Machine learning methods are used to identify suitable frames and regions of a video presentation for implantation automatically, and to select and insert virtual content into the identified frames and regions automatically. The identification of suitable frames and regions for implantation may include the steps of: segmenting the video presentation into frames or video segments; determining and calculating distinctive features such as colour, texture, shape and motion for each frame or video segment; and identifying the frames and regions for implantation by comparing the calculated feature parameters against parameters obtained from the learning process. The parameters may be obtained from an off-line learning process, including the steps of: collecting training data from similar video presentations (video presentations recorded using a similar setting); extracting features from these training samples; and determining parameters by applying learning algorithms, such as Hidden Markov Models, neural networks and Support Vector Machines, to the training data.
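The off-line learning and on-line classification steps above can be illustrated in miniature. This is a conceptual sketch only, with names of our choosing: the feature vectors stand in for the colour/texture/shape/motion features, and a nearest-centroid rule stands in for the HMM, neural network, or SVM learning algorithms named in the text.

```python
import numpy as np

def learn_parameters(training_features, labels):
    """Off-line step: derive one centroid per class from labelled training
    segments collected from similar video presentations."""
    feats = np.asarray(training_features, dtype=float)
    classes = sorted(set(labels))
    return {c: feats[np.array([l == c for l in labels])].mean(axis=0)
            for c in classes}

def classify_segment(features, centroids):
    """On-line step: compare a segment's feature vector against the learned
    parameters and label it by the nearest centroid."""
    f = np.asarray(features, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

In a real system the classifier output (e.g. "suitable" versus "unsuitable" segments) would drive the selection of frames and regions for implantation.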
Once relevant frames and regions have been identified, geometric information about the regions, and the content insertion time duration are used to determine the most appropriate type of content insertion. The inserted content could be an animation, static graphic logo, a text caption, a video insert, etc.
Content-based analysis of the video presentation is used to segment portions within the video presentations that are of lesser relevance to the thematic progress of the video. Such portions can be temporal segments, corresponding to a particular frame or scene and/or such portions can be spatial regions within a video frame itself.
Scenes of lesser relevance within a video can be selected. This provides flexibility in assigning target regions in the video presentation for content insertion. Embodiments of the invention can be fully automatic and run in a real-time fashion, and hence are applicable to both video-on-demand and broadcast applications. Whilst the invention may be best-suited to live broadcasts, it can also be used for recorded broadcasts.
The method and system of the example embodiment can be implemented on a computer system 500, schematically shown in
The computer system 500 comprises a computer module 502, input modules such as a keyboard 504 and mouse 506 and a plurality of output devices such as a display 508, and printer 510.
The computer module 502 is connected to the feed from the broadcast studio 14 via a suitable line, such as an ISDN line, and a transceiver device 512. The transceiver 512 also connects the computer to local broadcasting apparatus 514 (whether a transmitter and/or the Internet or a LAN) to output the integrated signal.
The computer module 502 in the example includes a processor 518, a Random Access Memory (RAM) 520 and a Read Only Memory (ROM) 522 containing the parameters and the inserts. The computer module 502 also includes a number of Input/Output (I/O) interfaces, for example I/O interface 524 to the display 508, and I/O interface 526 to the keyboard 504.
The components of the computer module 502 typically communicate via an interconnected bus 528 in a manner known to the person skilled in the relevant art.
The application program is typically supplied to the user of the computer system 500 encoded on a data storage medium such as a CD-ROM or floppy disk and read utilising a corresponding data storage medium drive of a data storage device 550, or may be provided over a network. The application program is read and controlled in its execution by the processor 518. Intermediate storage of program data may be accomplished using the RAM 520.
In the foregoing manner, a method and apparatus for insertion of additional content into video are disclosed. Only several embodiments are described. However, it will be apparent to one skilled in the art in view of this disclosure that numerous changes and/or modifications may be made without departing from the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
200404282-6 | Jul 2004 | SG | national |