The present invention relates to a system which modifies the content of an image. More particularly, the present invention relates to a method and apparatus which electronically substitutes content in one or more areas of an image. In some aspects, the present invention relates to an image content detection method and apparatus suitable for use with television broadcast video images.
WO 01/58147 (Rantalainen) describes a method for modifying television video images, wherein a billboard or other visible object is identified with non-visible electromagnetic radiation, such as infra-red light, and selected areas within the video image are replaced with alternate images appropriate to specific viewer groups or geographical regions. For example, billboards at a ground or arena of a major sporting event are observed as part of a television broadcast. Selected areas within the television video images are electronically substituted by alternate images that are more appropriate for a particular country or region. In particular, such an electronic system is useful to create multiple television feeds each having different advertisement content tailored according to an intended audience.
Considering the related art, there is still a difficulty in providing a reliable and effective mechanism for image content replacement. In particular, there is still a difficulty in providing a system which replaces image content in a way which is pleasing and unobtrusive for the viewer. It is now desired to provide an image content replacement apparatus and method which addresses these, or other, limitations of the current art, as will be appreciated from the discussion and description herein.
According to the present invention there is provided an apparatus and method as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims, and the description which follows.
In one aspect there is provided an improved mechanism for replacing content within camera video images. The mechanism may select most-appropriate or best-fit substitute image content for a particular point in time. The substitute content may be selected by considering the current field of view of the camera images and/or a position or orientation of a subject with respect to the field of view. The substitute content may be selected based on telemetry from the camera and/or by analysing the video images themselves. The mechanism may locate, define and replace one or more areas within a moving image which correspond to the subject or subjects.
In one embodiment, the subject is a billboard. In one example, a subject billboard reflects or emits electromagnetic radiation in one or more predetermined wavelength bands. A camera observes the subject to provide camera video images. At least one detector unit also observes the scene to derive a detector signal relating to the radiation from the subject to thereby distinguish the subject from its surroundings. A content replacement apparatus selectively replaces one or more marked areas within the camera video images with alternate image content, such as displaying an alternate advertisement on the billboards, according to a mask signal that is accurately and efficiently identified by the detector signals.
In one aspect there is provided an image content replacement apparatus. A camera image receiving unit receives video images observing a scene including a subject, a mask signal generating unit generates a mask signal that defines marked areas of the video images corresponding to the subject, a content substitution unit substitutes the marked areas with alternate image content according to the mask signal to output modified video images, and an image selector unit selects the alternate image content amongst at least a first alternate image content when the subject is determined to be in a first condition within the scene and a second alternate image content when the subject is determined to be in a second condition within the scene.
In one example, the image selector unit selects the alternate image content at a scene change point of the video images. A scene change point may be a point in time when the video images change significantly. In one example, a scene change point may occur at a point in time when the video images change from one camera to another camera. In one example, the image selector unit may select the alternate image content at a scene change point of the video images according to the camera that is currently used to provide the video images among a set of cameras.
In one example, the image selector unit is arranged to obtain a camera zoom signal defining a relative size of the subject within the video images and to select amongst the first and second alternate images based on the camera zoom signal. The camera zoom signal may define a relative height of the subject within the video images. In one example, the camera zoom signal is based on a camera telemetry signal which defines a focal length of a camera which observes the scene to provide the video images.
In one example, the image selector unit selects the first alternate image content when the subject is detected to be fully visible within the video images and selects the second alternate image content when the subject is detected to be partially obscured within the video images.
In one example, the image selector unit selects the first alternate image content when the subject is detected to be fully visible within the video images and selects the second alternate image content when the subject is detected to be incomplete within the video images.
In one example, the image selector unit detects the subject within the video images using the masking signal.
In one example, the image selector unit obtains a camera angle signal defining a relative angle of the camera with respect to the subject within the video images, and selects amongst the first and second alternate images based on the camera angle signal.
In one example, the camera angle signal defines a shooting angle of a camera which observes the scene to provide the video images. The shooting angle may be derived from a camera telemetry signal of the camera. The camera angle signal may be a pan or tilt signal from the camera.
In one example, the image selector unit selects amongst a sequence of replacement images which are triggered by the current value of the camera angle signal.
In one example, the image selector unit selects the first alternate image content when the subject is detected to be substantially planar to an image plane of the video images and selects the second alternate image content when the subject is detected to be at an acute angle with respect to the image plane of the video images.
In one aspect there is provided an image content replacement method. In the method, video images are provided from a camera of a scene including a subject. A mask area is defined corresponding to the subject within the scene, such as by providing a masking signal. A chosen alternate image is selected amongst at least a first alternate image content when the subject is determined to be in a normal condition within the scene and a second alternate image content when the subject is determined to be in an exceptional condition within the scene. The mask area in the video images is substituted with the chosen alternate image content.
The method may include obtaining a camera zoom signal defining a relative size of the subject within the video images, and selecting amongst the first and second alternate images based on the camera zoom signal. The camera zoom signal may be compared against a threshold to select amongst the first and second alternate images. In one example, the camera zoom signal defines a height of the subject within the video images. In another example, the camera zoom signal comprises a camera telemetry signal which defines a focal length of the camera.
The method may include detecting that the subject is partially obscured within the video images. The method may include generating a masking signal which defines the mask area of the video images and detecting that the subject is partially obscured within the video images using the masking signal.
The method may include choosing the first alternate image content when the subject is detected to be fully visible within the video images and choosing the second alternate image content when the subject is detected to be partially obscured by another object within the video images. The method may include defining a prime visible area of the subject using the masking signal, and comparing the prime visible area of the subject with a prime area of each of the first and second replacement images.
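As an illustrative sketch of the prime-area comparison (the function name, the normalised `prime_box` convention and the binary-mask representation are assumptions introduced here, not taken from the description), the check might verify that a candidate replacement image's prime region falls entirely within the currently unobscured part of the subject:

```python
import numpy as np

def prime_region_visible(visible_mask, prime_box):
    """Return True when a candidate image's prime region would be fully
    visible. `visible_mask` marks unobscured subject pixels (1 = visible);
    `prime_box` is (top, left, bottom, right) in normalised coordinates."""
    h, w = visible_mask.shape
    y0 = int(round(prime_box[0] * h))
    x0 = int(round(prime_box[1] * w))
    y1 = int(round(prime_box[2] * h))
    x1 = int(round(prime_box[3] * w))
    region = visible_mask[y0:y1, x0:x1]
    return region.size > 0 and bool(np.all(region == 1))
```

A replacement image whose prime region would be obscured could then be passed over in favour of an image whose important content remains visible.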
The method may include detecting that the subject is incomplete within the video images. The method may include generating a masking signal which defines the mask area of the video images and detecting that the subject is incomplete within the video images as the exceptional condition using the masking signal.
The method may include obtaining a camera angle signal defining a relative angle of the camera with respect to the subject within the video images, and selecting amongst the first and second alternate images based on the camera angle signal. The camera angle signal may define a shooting angle of the camera. The camera angle signal may be derived from a camera telemetry signal. The camera angle signal may be based on a current pan angle and/or current tilt angle of the camera. The method may include providing replacement images in a sequence triggered by the camera angle signal.
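A sequence of replacement images triggered by the camera angle signal can be sketched as follows (a minimal sketch; the degree boundaries and image names are illustrative assumptions):

```python
def select_by_angle(pan_deg, boundaries, images):
    """Select a replacement image from an ordered sequence; each angle
    boundary (degrees, ascending) triggers the next image in the sequence."""
    idx = 0
    for b in boundaries:
        if pan_deg >= b:
            idx += 1  # crossed this boundary: advance to the next image
    return images[min(idx, len(images) - 1)]
```

As the camera pans across the scene, the current pan angle then steps through the sequence of replacement images.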
In this method, the selecting step may be performed at a scene change point of the video images.
In one aspect there is provided a tangible non-transient computer readable medium having recorded thereon instructions which when executed cause a computer to perform the steps of any of the methods defined herein.
The example embodiments will be described with reference to a content replacement apparatus and method used to replace content within television video images, particularly to provide photo-realistic replacement of a billboard. However, the apparatus described herein may be applied in many other specific implementations, which may involve other forms of video images or relate to other subjects of interest, as will be apparent to persons skilled in the art from the teachings herein.
In the illustrated example embodiment, the observed subject of interest is a billboard 10 which carries original content 11 such as an advertisement (in this case the word “Sport”). The billboard 10 and the original content 11 are provided to be seen by persons in the vicinity. For example, many billboards are provided at a sporting stadium or arena visible to spectators present at the event. In one example, the billboards are provided around a perimeter of a pitch so as to be prominent to spectators in the ground and also in TV coverage of the event.
A television camera 20 observes a scene in a desired field of view to provide a respective camera feed 21. The field of view may change over time in order to track a scene of interest. The camera 20 may have a fixed location or may be movable (e.g. on a trackway) or may be mobile (e.g. a hand-held camera or gyroscopic stabilised camera). The camera 20 may have a fixed lens or zoom lens, and may have local pan and/or tilt motion. Typically, several cameras 20 are provided to cover the event or scene from different viewpoints, producing a corresponding plurality of camera feeds 21.
The billboard 10 may become obscured in the field of view of the camera 20 by an intervening object, such as by a ball, person or player 12. Thus, the camera feed 21 obtained by the camera 20 will encounter different conditions at different times during a particular event, such as (a) the subject billboard moving into or out of the field of view, (b) only part of the subject being shown, (c) the subject being obscured, wholly or partially, by an obstacle, and/or (d) the observed subject being both partially observed and partially obscured. Hence, there is a difficulty in accurately determining the position of the desired subject within the video images of the captured camera feed 21, and so in defining a masking area where the content within the captured feed is to be electronically replaced with alternate image content. There is a difficulty in providing substitute content smoothly and unobtrusively, e.g. so that the viewer can continue watching the game without being unduly distracted by the electronic replacement of billboard advertisements. Further, there is a difficulty in providing substitute content which is in itself interesting and attractive for the viewer.
As shown in
The vision mixer 30 is operated by a vision engineer to select amongst the camera feeds 21 at each point in time to produce a clean feed 31, also known as a director's cut clean feed.
The vision mixing system 30 may incorporate, or be coupled to, a graphics generator unit which provides a plurality of graphics layers 22, such as a station logo (“Logo”), a current score (“Score”) and a pop-up or scrolling information bar (“News: story1 story2”). Typically, the one or more graphics layers 22 are applied over the clean feed 31 to produce a respective dirty feed 32. A separate graphics computer system may produce the graphics layers 22, and/or the graphics layers 22 may be produced by the vision mixer 30. The graphics layers 22 may be semi-transparent and hence may overlap the observed billboard 10 in the video images. The graphics layers 22 may be dynamic, such as a moving logo, updating time or current score information, or a moving information bar. Such dynamic graphics layers give rise to further complexity in defining the desired masking area at each point in time.
The dirty feed 32 is output to be transmitted as a broadcast feed, e.g. using a downstream broadcast delivery 50. The dirty feed 32 may be broadcast live and/or is recorded for transmission later. The broadcast delivery system 50 may distribute and deliver the feed 32 in any suitable form including, for example, terrestrial, cable, satellite or Internet delivery mechanisms to any suitable media playback device including, for example, televisions, computers or hand-held devices. The broadcast feed may be broadcast to multiple viewers simultaneously, or may be transmitted to users individually, e.g. as video on demand.
A content replacement apparatus 40 is arranged to identify relevant portions of received video images corresponding to the observed subject of interest 10, and to selectively replace the identified portions with alternate content 42. In this case, the content replacement apparatus 40 receives a video image feed 31 and identifies therein the billboard 10 as the subject of interest. These video images are modified so that the billboard 10, which originally displayed the word “Sport”, now appears to display the alternate content 42, as illustrated by the word “Other”.
In this example, the content replacement apparatus 40 is coupled to receive video images 31 from the vision mixer 30 and to return amended video images 41 to the vision mixer 30. The content replacement apparatus 40 may be combined with the vision mixer 30, or may be provided as a separate and isolated piece of equipment. The content replacement apparatus 40 may be provided in the immediate vicinity of the vision mixer 30, or may be located remotely. The content replacement apparatus 40 may receive video images directly from the vision mixer 30, or via one or more intermediate pieces of equipment. The input video images 31 may be recorded and then processed by the content replacement apparatus 40 later, and/or the output images 41 may be recorded and provided to the vision mixer 30 later.
In the example embodiment, the content replacement apparatus 40 receives the clean feed 31 directly from the vision mixer 30 and produces a modified clean feed 41 as output. The graphics layers 22 are then added to these modified video images 41 through the graphics overlay unit 30b to create a modified dirty feed 33 ready for broadcast. In another example embodiment, the content replacement apparatus 40 receives both the clean feed 31 and the dirty feed 32, substitutes the subject 10 of interest, and then restores the graphics layers 22.
Many other specific configurations will be apparent to those skilled in the art. For example, the content replacement apparatus 40 may be provided prior to the mixer 30 and thus provide the alternate image feed 41 as an input to the mixer 30. In this case the mixer 30 may then apply the graphics layers 22 over the already modified video images 41 to produce the modified dirty feed. However, such a system then tends to be limited in the number of alternate dirty feeds 33 based on the capabilities of the mixer 30. By contrast, placing the content replacement apparatus 40 after the mixer 30 as illustrated in
In the example embodiment, a high value is achieved when images of a sporting event, such as a football or soccer match, are shown live to a large audience. The audience may be geographically diverse, e.g. worldwide, and hence it is desirable to create multiple different alternate broadcast feeds 33 for supply to the broadcasting system 50 to be delivered in different territories using local delivery broadcast stations 51, e.g. country by country or region by region. In a live event, the content replacement apparatus 40 should operate reliably and efficiently, and should cause minimal delay.
In the example embodiments, the alternate content 42 comprises one or more still images (e.g. JPEG image files) and/or one or more moving images (e.g. MPEG motion picture files). As another example, the alternate content 42 may comprise three-dimensional objects in a 3D interchange format, such as COLLADA, Wavefront OBJ or 3DS. The alternate content 42 is suitably prepared in advance and recorded on a storage medium 49 coupled to the content replacement apparatus 40. Thus, the content replacement apparatus 40 produces one or more output feeds 41 where the observed subject 10, in this case the billboard 10, is replaced instead with the alternate content 42. Ideally, the images within the alternate feed 41 should appear photo-realistic, in that the ordinary viewer normally would not notice that the content carried by the billboard 10 has been electronically substituted. Hence, it is important to accurately determine a masking area defining the position of the billboard 10 within the received video images input to the content replacement apparatus 40. Also, it is important to identify accurately when portions of the observed subject 10 have been obscured by an intervening object 12 such as a player, referee, etc. Notably, the intervening object or objects may be fast-moving and may appear at different distances between the camera 20 and the subject 10. Further, it is desirable to produce the alternate feed 41 containing the alternate content 42 in a way which is more agreeable and/or less obtrusive for the viewer.
As shown in
The detector signals 61 may be derived from any suitable wavelength of radiation. The wavelengths may be visible or non-visible. In the following example embodiment, the detector signals 61 are derived from infra-red wavelengths, and the detector signals 61 are infra-red video signals. Another example embodiment may detect ultra-violet radiation. In one example embodiment, polarised visible or non-visible radiation is detected. A combination of different wavelength groups may be used, such as a first detector signal derived from any one of infra-red, visible or ultra-violet wavelengths and a second detector signal derived from any one of infra-red, visible or ultra-violet wavelengths.
In the illustrated example embodiment, one or more detectors 60 are associated with the camera 20. In the example embodiment, each camera 20 is co-located with at least one detector 60. The detector 60 may survey a field of view which is consistent with the field of view of the camera 20 and so include the observed subject of interest 10. The detector field of view and the camera field of view may be correlated. Thus, the detector signals 61 are correlated with the respective camera feed 21. In the example embodiment, the detector signals 61 are fed to the content replacement apparatus 40. In the example embodiment, the detector signals 61 are relayed live to the content replacement apparatus 40. In another example embodiment, the detector signals 61 may be recorded into a detector signal storage medium 65 to be replayed at the content replacement apparatus 40 at a later time.
In this example, the subject billboard 10 comprises a substrate which carries a printed medium, such as a printed sheet, to display a desired printed message or advertisement. The billboard 10 may be passive, being illuminated by ambient radiation (e.g. from natural sunlight or stadium lights) and reflecting the ambient radiation toward the camera 20 and detector 60. Alternatively, the billboard 10 may be active, including a plurality of light units, such as light emitting diode (LED) packages. A lens unit and/or a diffuser (not shown) may be provided to distribute light from the LED units evenly across an illuminated area of the billboard. These light units may form a light box to illuminate the printed sheet from behind with infra-red light.
In the example embodiment, at least one infra-red detector 60 is associated with each of the cameras 20, producing one or more streams of the detector signals 61. As an example, the one or more detectors 60 may be narrow-spectrum near infra-red (NIR) cameras. The detector 60 may be mounted adjacent to the camera 20 so as to have a field of view consistent with the camera 20 and/or may share optical components with the camera 20.
The detector 60 may be arranged to move with the camera 20, e.g. to follow the same pan & tilt motions. In the example embodiments, each of the cameras 20 may provide a telemetry signal 22 which records relevant parameters of the camera, such as the focal length, aperture, motion and position. In one example, the telemetry signal 22 includes pan and tilt information. The telemetry 22 may also include zoom information or zoom information may be derived from analysing the moving images themselves. The telemetry 22 may be used, directly or indirectly, to calculate or otherwise provide pan, roll, tilt and zoom (PRTZ) information. The camera telemetry signal 22 may be passed to the content replacement apparatus 40, directly or via an intermediate storage or recording, in order to provide additional information about the field of view being observed by the camera 20.
In the example embodiment, the content replacement apparatus 40 comprises a camera image receiving unit 44, a signal processing unit 45, a mask signal generating unit 46, and a content substitution unit 47.
The camera image receiving unit 44 receives video images 21, which in this case are the video images taken by the cameras 20 to provide respective camera feeds. As described above, the camera feeds 21 may be multiplexed together to provide a clean feed 31 comprising moving images from different cameras 20 at different points in time. The clean feed 31 may be modified with additional graphics layers to produce a dirty feed 32. The camera images 21, the clean feed 31 and/or the dirty feed 32 may be provided to the content replacement apparatus 40, depending upon the nature of the installation.
The signal processing unit 45 receives signals which allow the subject billboards 10 to be identified within the video images 21. As will be discussed in more detail below, the signal processing unit 45 may process the infra-red detector signals 61 and/or the camera telemetry signals 22.
In the example embodiment the signal processing unit 45 comprises a detector signal processing unit 45a and a telemetry signal processing unit 45b.
The detector signal processing unit 45a processes the stream of detector signals 61 produced by the one or more detectors 60. In the example embodiments, the scene observed by the detector signal 61 is consistent with the scene in the video images 21 from the cameras 20. The detector signal processing unit 45a may spatially and/or temporally correlate the detector signals 61 with the video images 21. The detector signals 61 are preferably digital, or are digitised by analogue-digital conversion, thereby representing the field of view as an array of digital pixel values each representing an intensity of the detected radiation. As noted above, in the example embodiments the detector signals are based on infra-red wavelengths and thus represent an intensity of the selected infra-red wavelengths at each pixel value.
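A minimal sketch of deriving a subject mask from such a digitised detector frame is a per-pixel intensity threshold (the threshold value and the binary-mask representation are assumptions; the embodiments may refine this considerably, e.g. with spatial and temporal filtering):

```python
import numpy as np

def derive_mask(ir_frame, threshold):
    """Mark pixels whose detected infra-red intensity meets the threshold
    as subject pixels (1), all others as background (0)."""
    return (np.asarray(ir_frame) >= threshold).astype(np.uint8)
```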
Meanwhile, the telemetry signal processing unit 45b receives the telemetry signals 22 produced by the cameras 20. In particular, the telemetry signals 22 provide dynamic information concerning the field of view observed by the video images 21 and, consequently, the current field of view of the detector signals 61.
The telemetry signal processing unit 45b may use the received telemetry signals 22 to establish a location of the subject 10 relative to the observed field of view in the video images 21. In the example embodiments, the telemetry signal processing unit 45b is provided in advance with 3D coordinates defining a location of the or each subject billboard 10 and the or each camera 20 within a 3D spatial environment, which allows the relative locations of these components to be established within a defined consistent three dimensional space. The system may be calibrated in advance such that an optical centre of the lens of the camera 20 is known. In one example, a pin hole camera mathematical model is applied in order to calculate a projection or mapping of the subject billboard 10 from the real world onto the image plane in the field of view of the camera 20 at a default starting position.
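The pin-hole projection mentioned above can be sketched as follows (K for the intrinsic matrix and R, t for the camera pose are standard computer-vision conventions, assumed here rather than taken from the text):

```python
import numpy as np

def project_point(K, R, t, world_point):
    """Project a 3D world point onto the image plane: x ~ K (R X + t).
    Returns pixel coordinates (u, v)."""
    p = K @ (R @ np.asarray(world_point, dtype=float) + t)
    return p[0] / p[2], p[1] / p[2]
```

Applied to the four corners of a billboard whose 3D coordinates are known in advance, this yields the billboard's projected position in the image plane for a calibrated camera at its default starting position.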
In the example embodiments, the telemetry signal processing unit 45b then actively estimates a position of the subject 10 within the field of view of the camera 20 as the camera is moved, according to the telemetry signals 22. These calculations allow the system to estimate an approximate position of the subject 10 within the video images 21.
The mask signal generating unit 46 generates a mask signal 43 to be applied to video images 21. In particular, the mask signal 43 is generated based on the detector signals 61, and may be enhanced by also considering the telemetry signals 22.
The masking area signal 43 is itself a useful product of the system and can be output or recorded in a storage unit 50 to be used later (see
In the example embodiments, the content substitution unit 47 electronically substitutes one or more of the masked areas within the video images 21 with the alternate image content 42 according to the masking signal 43. Thus, the content substitution unit 47 in use produces the respective alternate video image feed 41.
In one aspect, the content substitution unit 47 comprises an image selector unit 48 which determines that a predetermined special case or exceptional condition has arisen which needs special handling within the content substitution unit 47. The image selector unit 48 may generate an image selection signal which distinguishes at least between first and second conditions, such as between a normal situation on the one hand and an exceptional situation or special case situation on the other. In response, the content substitution unit 47 selects and applies appropriate replacement content 42, e.g. selects amongst normal and exceptional substitute images, according to this special case selection signal. The content substitution unit 47 substitutes the identified area within the video images 21 according to the mask signal 43 using the identified replacement image content 42 as selected by the image selector unit 48.
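The substitution step itself can be sketched as a per-pixel composite under the mask (a simplified model; a practical system would also handle colour matching, motion blur and soft mask edges):

```python
import numpy as np

def substitute(frame, mask, alternate):
    """Replace masked pixels of a video frame with alternate content.
    `mask` holds values in [0, 1]; 1 selects the alternate content fully."""
    m = np.asarray(mask, dtype=float)[..., None]  # broadcast over channels
    out = m * alternate + (1.0 - m) * frame
    return out.astype(np.asarray(frame).dtype)
```

Fractional mask values give a soft blend at the subject's edges, which helps the replacement appear photo-realistic.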
As will be discussed below, the image selector unit 48 advantageously uses the camera telemetry to provide various enhancements within the content replacement apparatus 40. However, other embodiments are also envisaged which do not rely on the camera telemetry 22 and instead derive relevant signals or information directly from the camera images 21.
Zoom/Focal Length
In this example, video images 21a and 21b show the same scene at two different camera focal lengths, and thus different amounts of zoom. The image selector unit 48 is arranged to select from amongst available replacement content images 42a, 42b accordingly, so that a best-fit substitute is provided for each respective image or image sequence.
This mechanism is particularly useful in relation to cameras with a powerful zoom facility. The focal length of the camera 20 is a primary factor in determining whether the subject 10 will be visible distantly, as a normal case, or will instead be viewed in close up at this time. A camera 20 which observes a stadium or event with a wide field of view will tend to observe several billboards 10 distantly in their entirety, whereas the same camera with a high zoom value (long focal length) has a restricted field of view and will tend to capture only one of the subject billboards 10 in full. In this example, the system is capable of displaying selectively, for the same subject billboard 10, either the first alternate image 42a or the second alternate image 42b. In this case, the first image 42a is more appropriate to being viewed from a distance and contains the text "Other" or some suitable simplified message. The second alternate image 42b is more appropriate to being viewed in close-up and thus may contain more detailed text or images, such as, in this example, "Other image . . . just for you".
In a first example embodiment, the focal length Z of the camera 20, as derived from the telemetry signals 22, is compared against a threshold value Tz which distinguishes between a normal wide field of view and an exceptional narrow field of view. The threshold Tz may be set in advance according to the conditions of the location of the scene, such as by testing the cameras 20 prior to a live event. The telemetry thus provides a camera zoom signal. The current focal length Z is compared against the predetermined threshold by the image selector unit 48. In response, the image selector unit 48 selects the replacement image 42 within a normal set 42a or an exceptional set 42b. In other words, comparing the telemetry against a predetermined threshold determines a selection between at least first and second substitute images 42a, 42b. In the example mechanism, this selection allows a best fit of the relevant alternate image against the subject 10 which is currently in view.
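The first example embodiment thus reduces to a simple comparison (a sketch; the function name and units are illustrative assumptions):

```python
def select_by_zoom(focal_length, tz, normal_content, closeup_content):
    """Select the close-up (exceptional) replacement content when the
    camera focal length exceeds the threshold Tz; otherwise select the
    normal wide-shot content."""
    return closeup_content if focal_length > tz else normal_content
```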
In a second example embodiment, the image selector unit 48 uses the masking signal 43 to identify the special case or exceptional condition. In this case, the camera zoom signal is derived from the camera images 21. As an example, the masking signal 43 determines the region of the current image 21 at which the subject billboard 10 appears, and thus the proportion of the visible image area which the subject 10 occupies in this frame or sequence of frames. In the example embodiments, the subject billboards 10 have constant physical dimensions. Thus, a current height H of the subject billboards may be determined with reference to a vertical orientation of the image. The height H may be used in this situation as a useful indicator as well as, or in place of, the current camera focal length Z. The determined current height H may be expressed, for example, as a number of pixels or as a percentage of the full image height. The height H may be compared against a respective threshold value TH. As an example, an exceptional condition is considered to apply when the subject billboard occupies, say, 10% or 20% or more of the total height of the screen. A replacement image content 42a or 42b is selected accordingly, ready to be applied to the subject 10 in view.
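A sketch of deriving the height indicator H from the masking signal, expressed here as a fraction of the frame height (the pixel-count form mentioned above is analogous):

```python
import numpy as np

def subject_height_fraction(mask):
    """Height of the masked subject as a fraction of the frame height,
    measured between the first and last rows containing subject pixels."""
    mask = np.asarray(mask)
    rows = np.flatnonzero(np.any(mask == 1, axis=1))
    if rows.size == 0:
        return 0.0  # subject not present in this frame
    return (rows[-1] - rows[0] + 1) / mask.shape[0]
```

The result can then be compared against the threshold TH, e.g. a value of 0.2 or more triggering the exceptional close-up content.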
In one aspect, more than one threshold value may be applied. However, a single threshold is preferred in the example embodiments for simplicity. The threshold conveniently determines whether the currently tested value, e.g. the zoom Z or the subject height H, lies within a first or a second range, and the first or second alternate image 42a or 42b is selected accordingly.
In one aspect, the image selector unit 48 determines whether or not a special case applies at a scene change point, namely at a point in time when the video images 21 change significantly. As will be familiar in the context of video editing, a scene change point occurs, for example, when the current image feed cuts from one camera to another camera. Making the determination at the scene change point minimises disruption for the viewer and is least noticeable; hence, using the scene change point as a trigger for the determination improves the photorealistic effect. The determined replacement content 42a or 42b is then maintained until the next scene change point. That is, even if the camera changes focal length within a single scene, moving from a high zoom or narrow field of view (high Z or H value) back towards a wide field of view or normal condition (low Z or H value), the selected replacement image 42b is maintained until the next scene change point, at which point in time the determination is made again. This mechanism also inhibits unwanted oscillation between images, such as where the tested Z or H value is close to the threshold T.
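The scene-change-gated behaviour described above may be illustrated by the following non-limiting sketch, in which the class and attribute names are assumptions for the purpose of the example:

```python
# Illustrative sketch: the selection is re-evaluated only at a scene change
# point and held fixed in between, which also suppresses oscillation when
# the tested Z or H value hovers near the threshold T.

class SceneGatedSelector:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.current = None  # no selection made yet

    def on_frame(self, tested_value: float, scene_change: bool) -> str:
        """Re-evaluate the threshold test only on the first frame or at a
        scene change point; otherwise maintain the prior selection."""
        if self.current is None or scene_change:
            self.current = "42b" if tested_value > self.threshold else "42a"
        return self.current
```

In use, a mid-scene zoom past the threshold leaves the prior selection in place; only the next cut triggers a fresh determination.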
As one example embodiment, the alternate image content 42 may comprise moving images which loop or repeat after a defined period of time (e.g. 3 or 5 seconds). Suitably, the determination is made at a scene change point of the replacement media. For certain kinds of video advert, the change can be deferred until the advert file has looped, i.e. when the video has reached its end and before that particular video sequence starts playing from the beginning again.
In another example embodiment, the image selector unit 48 may select the alternate image content at a scene change point of the video images according to the camera that is currently used to provide the video images, amongst a plurality of cameras. This embodiment considers the situation where a first camera is provided to take close-up images with a long focal length, while a second camera has a wide field of view. In this case the alternate image content 42a, 42b may be selected based on a camera signal C which identifies the camera currently in use.
Partially Obscured Subjects
In this example, if the obscuring object 12 now moves, then typically it will be more appropriate and least noticeable to maintain the same selected replacement content until a next scene change point. However, at other times it will be appropriate to re-evaluate the subject 10 according to the changing position of the obstacle 12 and select a new best fit replacement image even within a single scene.
Incomplete Subjects
Action Following
In one example embodiment, a shooting angle or shooting direction of the camera 20 is determined by the telemetry 22. The current pan angle P or tilt angle T may be used to infer the current relative location of the billboard 10 within the image 21.
As shown in
Further, as another example, the alternate content 42 may comprise three-dimensional objects in a 3D interchange format, such as COLLADA, Wavefront OBJ or 3DS. These 3D-adverts allow the internal scale and orientation of the replacement media to be manipulated, based on the telemetry input data. As an example, a 3D text logo advert can be made to smoothly follow or “watch” the center of the visible screen.
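The geometry by which such a 3D advert may be oriented to "watch" a point, such as the centre of the visible screen, can be illustrated as follows; the coordinate convention (y up) and function name are assumptions made for this sketch:

```python
# Illustrative look-at computation for a 3D replacement object: given the
# object's position and a target point (e.g. where the camera's optical
# axis crosses the scene), derive yaw/pitch angles so the object faces it.
import math

def look_at_angles(obj_pos, target_pos):
    """Return (yaw, pitch) in degrees for an object at obj_pos to face
    target_pos. Positions are (x, y, z) tuples with y as the up axis."""
    dx = target_pos[0] - obj_pos[0]
    dy = target_pos[1] - obj_pos[1]
    dz = target_pos[2] - obj_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))                   # rotation about the up axis
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # elevation
    return yaw, pitch
```

Driving these angles frame by frame from the telemetry input data yields the smooth following motion described above.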
This embodiment provides functionality which is not only pleasing for the viewer but also enhances the viewer's experience of the alternate, electronically modified broadcast feed.
Acute Angled Subjects
In the example embodiments, the exceptional condition for awkwardly angled subjects is identified by the telemetry 22 which reveals a current shooting direction of the camera 20. Given the known relative physical locations of the subject 10 and camera 20, appropriate threshold pan or tilt values can be predicted by 3D geometric modelling as discussed above. The exceptional case can thus be detected in use with this knowledge of the geometry of the scene including camera locations in relation to the subject billboards 10. Further, testing the cameras 20 in advance of a live event allows the threshold pan P and/or tilt T values to be determined at which the awkwardly angled subjects 10f will appear. The replacement images may then be selected accordingly for those subjects. Thus, the system identifies whether the respective subject 10 within the received image frame 21 will appear normally or will appear in the exceptional geometric condition.
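A non-limiting sketch of this detection follows; the pre-calibrated pan and tilt ranges are assumed to have been found by testing the cameras in advance, as described above, and the function name is illustrative:

```python
# Illustrative sketch: the exceptional geometric condition applies when the
# camera's current shooting direction (from telemetry 22) falls outside the
# pan/tilt ranges within which this billboard appears normally.

def is_acute_angled(pan_deg: float, tilt_deg: float,
                    pan_limits: tuple, tilt_limits: tuple) -> bool:
    """True when the subject will appear in the exceptional (awkwardly
    angled) condition for the given shooting direction."""
    p_lo, p_hi = pan_limits
    t_lo, t_hi = tilt_limits
    return not (p_lo <= pan_deg <= p_hi and t_lo <= tilt_deg <= t_hi)
```

The replacement image for the subject 10f is then selected according to the result of this test.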
In the method, video images are provided from a camera 20 of a scene including a subject 10 at step 901. At step 902, a mask area is defined corresponding to the subject within the scene, such as by providing a masking signal 43. At step 903, a chosen alternate image 42 is selected amongst at least a first alternate image content 42a when the subject 10 is determined to be in a first or normal condition within the scene and a second alternate image content 42b when the subject is determined to be in a second or exceptional condition within the scene. At step 904, the mask area in the video images 21 is substituted with the chosen alternate image content 42.
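Steps 901 to 904 may be illustrated, without limitation, by the following sketch, in which the pixel-level representation of frames, masks and content is an assumption made for clarity:

```python
# Illustrative sketch of steps 901-904: select alternate content 42a for
# the normal condition or 42b for the exceptional condition (step 903),
# then substitute the mask area in the video image (step 904).
# Frames and content are lists of pixel rows; the mask is per-pixel booleans.

def replace_subject(frame, mask, normal_condition, content_a, content_b):
    chosen = content_a if normal_condition else content_b  # step 903
    out = []
    for y, row in enumerate(frame):                        # step 904
        out.append([chosen[y][x] if mask[y][x] else px
                    for x, px in enumerate(row)])
    return out
```

Pixels inside the mask area take the chosen alternate content; all other pixels of the video image pass through unchanged.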
The method may be augmented by any of the further steps as discussed herein. For example, the method at step 903 may include obtaining a camera zoom signal defining a relative size of the subject within the video images, and selecting amongst the first and second alternate images based on the camera zoom signal.
At step 903, the method may include obtaining a camera angle signal defining a relative angle of the camera with respect to the subject within the video images, and selecting amongst the first and second alternate images 42a, 42b, etc, based on the camera angle signal. The camera angle signal may define a shooting angle of the camera. The camera angle signal may be derived from a camera telemetry signal 22. The camera angle signal may be based on a current pan angle and/or a current tilt angle of the camera 20. The method may include providing replacement images in a sequence triggered by the camera angle signal.
The industrial application of the example embodiments will be clear from the discussion herein.
At least some embodiments of the invention may be constructed, partially or wholly, using dedicated special-purpose hardware. Terms such as ‘component’, ‘module’ or ‘unit’ used herein may include, but are not limited to, a hardware device, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks. Alternatively, elements of the invention may be configured to reside on an addressable storage medium and be configured to execute on one or more processors. Thus, functional elements of the invention may in some embodiments include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Further, although the example embodiments have been described with reference to the components, modules and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements.
Although a few example embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
1210332.1 | Jun 2012 | GB | national |
This application is a continuation of U.S. application Ser. No. 14/407,235, filed Dec. 11, 2014, which was the National Stage of International Application No. PCT/EP2013/062184, filed Jun. 12, 2013, which claims the benefit of G.B. Application No. 1210332.1, filed Jun. 12, 2012, the disclosures of each of which are incorporated herein by reference in their entirety.
Number | Date | Country | |
---|---|---|---|
Parent | 14407235 | Dec 2014 | US |
Child | 15875096 | US |