Having thus generally described the nature of the invention, reference will now be made to the accompanying drawings, showing by way of illustration a preferred embodiment thereof and in which:
Referring now to the drawings and more particularly to
For broadcasting the sport event, one or a plurality of video camera modules 12 capture video images of the event. The object/person A, whose trajectory is of particular interest in the sport game, is generally in the field of view of the video camera module 12, but it may or may not be visible (i.e., perceptible) in the video images because of, for example, the limited resolution of the video images. As the object/person A travels, a monitoring module 14 passively tracks the object/person and measures the 3D position of the object/person A over time. As the event is being broadcast, a graphical representation of the trajectory, or a graphical representation showing the position of the object/person A as it travels, is depicted on the image to enhance the visibility of the object/person A on the broadcast image.
As known in the art, passive tracking includes methods in which no special modification of the object to be tracked is required. One example of a passive tracking method is a stereoscopic method, in which the object is tracked in video images using pattern recognition. Active tracking methods include methods in which the tracking is assisted by a transmitting device installed on the object to be tracked.
In the embodiment of
In the stereoscopic embodiment, the position, orientation and zoom (i.e., tracking parameters) of the tracking cameras 24 in a global reference frame are known such that the three-dimensional position of the object in the global reference frame is calculable using triangulation techniques. In this embodiment, the orientation and the zoom of the tracking cameras 24 are variable (e.g., operators manually handling the cameras) as the object/person A travels, to maintain the object/person A in the field of view of the cameras 24. In an alternative embodiment, the position of the tracking cameras 24 can also be varied. In any case, as the object/person A moves along its trajectory, the tracking parameters (position, orientation and/or zoom) are monitored. In an embodiment, the tracking cameras 24 are motorized and automatically controlled to track the object/person A as it travels along its trajectory. All tracking parameters need to be pre-calibrated using, for instance, a pattern recognition method and known physical locations (i.e., ground points).
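By way of illustration only, the conversion of monitored tracking parameters into geometry usable for triangulation may be sketched as follows in Python; the angle conventions, axis assignments and function names are assumptions made for this sketch, not limitations of the system. A tracking camera's monitored pan and tilt angles define a viewing ray in the global reference frame:

    import numpy as np

    def camera_ray(position, pan_deg, tilt_deg):
        """Return (origin, direction) of a tracking camera's optical axis
        in the global reference frame. Pan is assumed measured about the
        vertical (z) axis and tilt as an elevation above the horizontal;
        these conventions are illustrative only."""
        pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
        direction = np.array([np.cos(tilt) * np.cos(pan),
                              np.cos(tilt) * np.sin(pan),
                              np.sin(tilt)])
        return np.asarray(position, dtype=float), direction

    # Example: a tracking camera mounted at (0, 0, 10) m, panned 30
    # degrees and tilted 5 degrees upward.
    origin, direction = camera_ray([0.0, 0.0, 10.0], 30.0, 5.0)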
As the event is being broadcast, the measured 3D positions of the object/person A are accumulated as a function of time to provide a 3D trajectory of the object/person. The measured 3D position or trajectory is projected onto the video image, and a graphical representation of the trajectory or of the actual position of the object/person A is drawn on the image to enhance the visualization of the object/person A on the broadcast image. The graphical representation may be a curve, a series of points, a ghost image of the object/person A, or the like, showing the trajectory of the object/person A, or it may be a point or an image of the object/person A showing only the actual position of the object/person A. In an embodiment, the graphical representation of the trajectory is drawn in real-time on the video image, i.e., the up-to-date trajectory is graphically added to the video image as the object/person A travels. Alternatively, the graphical representation of the trajectory could appear on the video image at the end of the trajectory, e.g., when the ball arrives at its destination (e.g., touches the ground) or when the athlete reaches the finish line.
In order to perform the projection, the view parameters (i.e., the position, orientation and zoom) of the video camera module 12, which provides the broadcast footage, are monitored in the global reference frame. In this embodiment, the orientation and zoom of the video camera module 12 are varied to select the appropriate view for broadcasting the event and the view parameters are monitored. Alternatively, the video cameras 18 could be fixed. In any case, all view parameters need to be pre-calibrated using, for instance, a pattern recognition method and known physical locations (i.e., ground points).
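The pre-calibration from known physical locations may be illustrated, again without limitation, by the following Python sketch using the OpenCV routine cv2.solvePnP, one publicly available implementation of pose estimation from 3D-2D correspondences; the ground points, pixel coordinates and intrinsic values below are placeholders:

    import numpy as np
    import cv2

    # Known physical locations (ground points) in the global reference
    # frame, in metres, and their observed pixel coordinates in the image.
    ground_points = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                              [10.0, 5.0, 0.0], [0.0, 5.0, 0.0]])
    image_points = np.array([[320.0, 400.0], [620.0, 410.0],
                             [600.0, 300.0], [340.0, 290.0]])

    # Intrinsic matrix encoding the zoom (focal length in pixels),
    # assumed known from the zoom encoder reading.
    K = np.array([[1000.0, 0.0, 480.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    # Recover the camera's orientation (rvec) and translation (tvec) in
    # the global frame from the 3D-2D correspondences.
    ok, rvec, tvec = cv2.solvePnP(ground_points, image_points, K, None)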
The video camera module 12 is provided for taking a video image framing the object/person for live broadcast of the event. As previously stated, the object/person may or may not be visible (i.e., perceptible) in the video image taken by the video camera module 12.
The monitoring module 14 measures a 3D position of the object/person in time and provides the 3D trajectory of the object/person.
The broadcasting image processing unit 16 renders a graphical representation of the trajectory or a graphical representation showing the position of the object/person A as it travels, on the video image.
The statistic/storage module 34 stores a plurality of object/person trajectories obtained at the sport event.
The video camera module 12 comprises at least one video camera 18. Images from a plurality of video cameras 18 can also be combined when producing the broadcast program. The view parameters of each video camera 18 can be varied (e.g., manually or automatically) as the location of the action of the game varies. More specifically, the position, orientation and/or zoom of the camera are variable as a function of the footage gathered for the video broadcast.
Accordingly, a view parameter reader 22 is provided for each video camera 18 for reading the varying position, orientation and/or zoom. The view parameter reader 22 typically has encoders, inertial sensors and the like for reading the orientation of the camera 18, and encoders for reading the zoom of the camera 18, i.e., the focal length of the camera's lens. In embodiments where the position of the video camera 18 is variable, the view parameter reader 22 typically has a positioning system (e.g., GPS or a local implementation).
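As one non-limiting example of how an encoder reading may be turned into a usable view parameter, the following Python sketch maps a zoom encoder count to a focal length and builds the corresponding pinhole intrinsic matrix; the linear counts-to-millimetres mapping and all numeric values are assumptions for illustration, and a real lens would be calibrated against a lookup table:

    import numpy as np

    def intrinsics_from_zoom(encoder_counts, counts_to_mm=0.01,
                             pixel_pitch_mm=0.005,
                             principal_point=(480.0, 360.0)):
        """Build a pinhole intrinsic matrix from a zoom encoder reading.
        The linear counts-to-focal-length mapping is a placeholder."""
        focal_mm = encoder_counts * counts_to_mm   # focal length in mm
        focal_px = focal_mm / pixel_pitch_mm       # focal length in pixels
        cx, cy = principal_point
        return np.array([[focal_px, 0.0, cx],
                         [0.0, focal_px, cy],
                         [0.0, 0.0, 1.0]])

    K = intrinsics_from_zoom(5000)   # e.g., a 50 mm focal length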
The monitoring module 14 is a three-dimensional measuring system. In an embodiment, the module 14 uses stereoscopy to measure the 3D trajectory of the object/person, but any other 3D measuring method could alternatively be used. The monitoring module 14 uses at least two tracking camera modules 19, each having a tracking camera 24 for acquiring tracking images of the object/person and an associated tracking parameter reader 23. The orientation and the zoom of the tracking cameras are controlled (e.g., manually) to allow an operator to follow the object/person A such that it is maintained in the field of view of the camera as it travels along the trajectory. The varying orientation and zoom of the tracking cameras in the global reference frame are monitored using the tracking parameter reader 23. Additionally, in an alternative embodiment, the position of the cameras can also be manually controlled and is monitored.
Like the view parameter reader 22, the tracking parameter reader 23 typically has encoders, inertial sensors and the like for reading the orientation of the tracking camera 24, and encoders for reading the zoom of the tracking camera 24, i.e., the focal length of the camera's lens. In embodiments where the position of the tracking camera 24 is variable, the tracking parameter reader 23 also typically has a positioning system (e.g., GPS or a local implementation).
It is contemplated that, as the broadcast event goes on, the role of a video camera module 12 and of a tracking camera module 19 could be swapped at any time. Accordingly, at one time, a first camera could be used for providing the video image and, at another time, a second camera could be used for providing the video image while the first camera is used for providing a tracking image for measuring the position of the object/person.
A 3D trajectory processing unit 26 calculates the 3D position of the object/person A as it travels and comprises a trajectory memory 28, a 2D image processor 30, and a global position calculator 32. The 2D image processor 30 passively tracks the location of the object/person A in the tracking images using pattern recognition and provides a 2D position of the object/person in the image obtained from each of the cameras 24. The handling of the tracking cameras 24 for the tracking of the object/person A may be completely automated or may be operator-assisted. For example, the operator could point out the position of the object/person on the image at the beginning of the trajectory, i.e., while the object/person A is still, and the 2D image processor 30 tracks the object/person A from that location.
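One simple stand-in for the pattern recognition performed by the 2D image processor 30 is normalized template matching. The following self-contained Python sketch (using OpenCV, with a synthetic frame in place of a real tracking image) locates the object's template in a tracking image and returns the 2D position of the best match; it is illustrative only, and any pattern recognition method may be substituted:

    import numpy as np
    import cv2

    def track_in_frame(frame_gray, template_gray):
        """Locate the object's template in a tracking image and return
        the centre of the best match as the object's 2D position."""
        result = cv2.matchTemplate(frame_gray, template_gray,
                                   cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        h, w = template_gray.shape
        return (max_loc[0] + w // 2, max_loc[1] + h // 2), max_val

    # Synthetic example: a textured patch (the "object") in a noisy frame.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 50, size=(240, 320), dtype=np.uint8)
    frame[100:120, 200:220] = 200
    frame[105:115, 205:215] = 255
    template = frame[100:120, 200:220].copy()
    centre, score = track_in_frame(frame, template)   # ((210, 110), ~1.0)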
The global position calculator 32 calculates the 3D position of the object/person in the global reference frame using triangulation techniques which are well known in the art. These methods use the 2D positions and the tracking parameters to obtain the 3D position of the object/person. The 3D positions are accumulated in the trajectory memory 28 to provide the 3D trajectory of the object/person A. The 3D trajectory is updated in real-time as the object/person travels, and the up-to-date trajectory can thus be rendered on the broadcast image in real-time.
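The triangulation itself may be sketched as follows in Python, using the classical closest-point construction between two viewing rays; the example camera positions are arbitrary. The 3D position is taken as the midpoint of the shortest segment joining the two rays, which accommodates rays that do not intersect exactly because of measurement noise:

    import numpy as np

    def triangulate(o1, d1, o2, d2):
        """Return the midpoint of the shortest segment between two
        viewing rays (origin o, unit direction d)."""
        o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b              # approaches 0 for parallel rays
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return (o1 + s * d1 + o2 + t * d2) / 2.0

    # Two tracking cameras whose rays meet at the point (5, 5, 2).
    d1 = np.array([5.0, 5.0, 2.0]); d1 /= np.linalg.norm(d1)
    d2 = np.array([-5.0, 5.0, 2.0]); d2 /= np.linalg.norm(d2)
    print(triangulate([0, 0, 0], d1, [10, 0, 0], d2))   # ~[5. 5. 2.]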
The broadcasting image processor 16 adds a graphical representation of the trajectory over the video image to be broadcast. Alternatively, a graphical representation showing only the actual position of the object/person A could be added. The broadcasting image processor 16 is controlled by the operator of the system through an operator interface 36. The operator may turn the graphical representation on and off and may add a graphical representation of statistical data, as will be discussed further below.
In this embodiment, a 3D model 38 of the event venue is provided and taken into account in the graphic rendering. On segments of the trajectory where the object/person A is hidden by the 3D profile of the site (as seen by the video camera 18), the graphical representation is omitted. For example, if the object/person is behind a hill or a building, the trajectory is not drawn on the video image even though the trajectory is known (i.e., could be displayed). The 3D model 38 is thus used to improve the realism of the graphical representation.
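A minimal sketch of such an occlusion test, assuming for illustration that the 3D model 38 can be reduced to a height function z = h(x, y) (a full 3D model would instead use a mesh intersection test), is:

    import numpy as np

    def visible(cam_pos, point, terrain_height, n_samples=64):
        """Return True if the straight line from the video camera to a
        trajectory point clears the site's 3D profile, here reduced to a
        height function z = terrain_height(x, y) for illustration."""
        cam_pos = np.asarray(cam_pos, dtype=float)
        point = np.asarray(point, dtype=float)
        for u in np.linspace(0.0, 1.0, n_samples):
            x, y, z = cam_pos + u * (point - cam_pos)
            if z < terrain_height(x, y):
                return False   # line of sight enters the terrain: occluded
        return True

    # Toy site model: a hill centred at (50, 0).
    hill = lambda x, y: 8.0 * np.exp(-((x - 50.0) ** 2 + y ** 2) / 200.0)

    # A ball behind the hill, as seen from a camera near the origin.
    print(visible([0, 0, 2], [100, 0, 1], hill))   # False: segment omitted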
As the sport event goes on, the various trajectories performed by various players, or on various tries of the same player, are typically stored in the statistic/storage module 34. This feature provides the option of superposing a graphical representation of, for example, the best performance to date on the broadcast image for comparison purposes. The average performance of the actual player or any other trajectory may also be superposed. Several trajectories may also be superposed on the live event image, i.e., when the object/person starts its motion, several trajectories are started at the same time so that comparisons between them can be made in real-time. Any other statistical or numerical data that can be determined from the measured trajectory and that is relevant to the sport event can also be stored in the statistic/storage module 34. Such statistics include the distance reached by the trajectory, the highest point of the trajectory, the maximum speed of the object/person along the trajectory, the time elapsed during the trajectory, etc.
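The statistics enumerated above follow directly from the time-stamped trajectory. A minimal Python sketch, in which the field names and units are illustrative assumptions, is:

    import numpy as np

    def trajectory_statistics(times, positions):
        """Compute, from a time-stamped 3D trajectory, the horizontal
        distance reached, the highest point, the maximum speed and the
        elapsed time (here in seconds and metres)."""
        t = np.asarray(times, dtype=float)
        p = np.asarray(positions, dtype=float)     # shape (N, 3)
        horizontal = p[:, :2] - p[0, :2]
        speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / np.diff(t)
        return {
            "distance_m": float(np.linalg.norm(horizontal[-1])),
            "highest_point_m": float(p[:, 2].max()),
            "max_speed_m_per_s": float(speeds.max()),
            "elapsed_s": float(t[-1] - t[0]),
        }

    # Toy ballistic trajectory sampled at 10 Hz.
    t = np.arange(0.0, 2.0, 0.1)
    p = np.stack([20.0 * t, np.zeros_like(t), 10.0 * t - 4.9 * t ** 2], axis=1)
    print(trajectory_statistics(t, p))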
An operator of the system controls the choice of displayed trajectories through the operator interface 36. The operator interface 36 is also used to associate each trajectory with the player that performed it and with other statistical data. The operator interface 36 can also be used to select between trajectory display and position display, or between various styles of graphical representation.
It is contemplated that each 3D position may be stored in the trajectory memory 28 along with its associated time stamp for use, for example, in calculating statistical data. The data provided by the tracking camera modules 19, the data provided by the video camera module 12 from at least one video camera, and the communications between the different modules of the system are preferably synchronized. It is contemplated that any appropriate synchronizing method known to one skilled in the art can be used.
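A minimal stand-in for such time-stamped storage, assuming nothing about the actual implementation of the trajectory memory 28, might look as follows:

    from dataclasses import dataclass, field

    @dataclass
    class TrajectoryMemory:
        """Illustrative stand-in for the trajectory memory 28: each
        measured 3D position is stored with its time stamp so that
        statistics can later be derived from the same record."""
        samples: list = field(default_factory=list)

        def add(self, timestamp, position):
            self.samples.append((timestamp, tuple(position)))

        def trajectory(self):
            # Return samples ordered by time stamp, in case measurements
            # arrive out of order from the tracking camera modules.
            return sorted(self.samples)

    memory = TrajectoryMemory()
    memory.add(0.1, (2.0, 0.0, 0.9))
    memory.add(0.0, (0.0, 0.0, 1.0))
    print(memory.trajectory())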
Referring to
The broadcasting image processing unit 16 comprises a 2D projection renderer 40 and a graphical combiner 42. The 2D projection renderer 40 receives the 3D trajectory and the view parameters and projects the 3D trajectory in the global reference frame onto the video image. The graphical combiner 42 adds a graphical representation of the trajectory on the video image, or a graphical representation showing only the actual position of the object/person.
In order to combine the trajectory/position information with the video image, the 2D projection renderer 40 must associate the video image with the global reference frame. As discussed previously, the view parameters of the video camera 18 are known, as provided by the video camera module 12.
Accordingly, with the position, orientation and zoom of the video camera 18 in the global reference frame, provided from the view parameters, the 2D projection renderer 40 determines the projection parameters associated with the video image within the global frame of reference. The 2D projection renderer 40 then projects the 3D trajectory using the same projection parameters. A projected trajectory is thereby provided as 2D points associated to the video image.
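For illustration, this projection may be sketched with the standard pinhole model x ~ K(RX + t), where the intrinsic matrix K follows from the zoom and the rotation R and translation t follow from the monitored position and orientation; all numeric values below are placeholders:

    import numpy as np

    def project_trajectory(points_3d, K, R, t):
        """Project 3D trajectory points (global frame) to pixel
        coordinates using the pinhole model x ~ K (R X + t), i.e., the
        same projection the video camera itself performs."""
        X = np.asarray(points_3d, dtype=float).T        # shape (3, N)
        cam = R @ X + np.asarray(t, dtype=float).reshape(3, 1)
        uvw = K @ cam
        return (uvw[:2] / uvw[2]).T                     # shape (N, 2) pixels

    K = np.array([[1000.0, 0.0, 480.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 0.0])  # camera at origin, looking along +z
    print(project_trajectory([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]], K, R, t))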
The graphical combiner 42 adds a graphical representation of the trajectory to the video image or, alternatively, a graphical representation showing the actual position of the object/person. The graphical representation can for instance be a realistic rendering of the object/person as it progresses along the trajectory, a curve depicting the projected trajectory (i.e., a curve passing through sampled 2D points) or dots distributed along the projected trajectory (i.e., located on selected 2D points). The broadcasting image is therefore the video image with a graphical display representing the trajectory or, alternatively, the object/person.
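The curve and dot styles named above can be rendered with elementary drawing primitives. The following Python sketch, using OpenCV on a blank stand-in for the video image with arbitrary projected points, draws both styles and is illustrative only:

    import numpy as np
    import cv2

    # A blank stand-in for the video image and some projected 2D points.
    image = np.zeros((720, 960, 3), dtype=np.uint8)
    pts = np.array([[100, 600], [300, 350], [500, 250], [700, 300],
                    [860, 450]], dtype=np.int32)

    # A curve passing through the sampled 2D points...
    cv2.polylines(image, [pts.reshape(-1, 1, 2)], False, (0, 255, 255), 3)
    # ...or dots located on selected 2D points.
    for x, y in pts:
        cv2.circle(image, (int(x), int(y)), 6, (0, 0, 255), -1)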
Moreover, statistical data is provided from the statistic/storage module 34 to the 2D projection renderer 40. As commanded through the operator interface 36, statistical information may be added to the video image using the graphical combiner 42.
The system 10 for enhancing the visibility of an object/person on a video image used in broadcasting a sport event has numerous contemplated uses. For example, the system can be used in broadcasting a golf game or tournament by drawing the trajectory of the golf ball in the air on the video image. It can also be used for visualizing the thrown object when broadcasting discus, hammer or javelin throwing, the trajectory of the athlete in ski jumping, the trajectory of the hit ball in baseball, or the trajectory of the kicked ball in football or soccer. Another example is the trajectory of the athlete in an alpine skiing competition.
It should be understood that, if only the actual position of the object/person is to be graphically displayed on the broadcast image, a trajectory memory is not required and the broadcasting image processing unit can instead receive the actual 3D position of the object/person rather than the 3D trajectory.
While illustrated in the block diagrams as groups of discrete components communicating with each other via distinct data signal connections, it will be understood by those skilled in the art that the preferred embodiments may be provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system, and many of the data paths illustrated being implemented by data communication within a computer application or operating system. The structure illustrated is thus provided for efficiency of teaching the present preferred embodiment.
The embodiments of the invention described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.