The following relates to systems and methods for tracking the speed of an object using video, particularly a spherical sports ball.
In sports training and competition it is often desirable to measure and track the speed of the ball being used. For example, one may want to determine how fast a pitch is thrown by a baseball pitcher, or how fast a serve is in tennis, a drive in golf, a kick in soccer (football), etc.
There are several existing techniques for measuring ball speed. For example, radar guns have traditionally been used to measure speed using the Doppler effect, but require dedicated equipment (i.e. the radar gun) and someone to operate the equipment. Moreover, radar guns that have a desirable accuracy are known to be expensive and therefore are typically not accessible to recreational players or teams. Lower cost models can be used, but typically do not provide the accuracy that is desired.
Imaging-based techniques have also been attempted, e.g., by tracking speed from a video. However, such techniques have been found to require a complex model of the movement, or environmental data and other inputs in order to achieve a certain level of accuracy.
It is therefore desirable to enable an accurate ball speed measurement without requiring complex and/or expensive equipment or the burden of obtaining extrinsic data that can change depending on where the technique is used.
In one aspect, there is provided a method for determining ball speed, comprising obtaining a video having a plurality of frames; analyzing each frame of the video to detect a position and a size of a ball moving through the frames; calculating the ball speed using the position and size determined from the frames, and a predetermined true size of the ball associated with the video; and outputting the speed of the ball.
In other aspects, there are provided a computer readable medium and electronic device for performing the method.
Embodiments will now be described with reference to the appended drawings wherein:
The following provides a system and method for tracking moving objects using video, particularly for tracking the speed and movement of a sports ball, e.g. during a competitive or training event. The method described herein can be applied without requiring extrinsic data about the environment or conditions in which the ball is being used, but rather tracks the speed and movement using the video and predetermined knowledge of the true size of the ball. In this way, existing and conveniently available equipment such as smart phones and personal computing equipment can be used to obtain and process video in order to determine and output (e.g., display) the results.
Turning now to the figures,
The ball 10 has a travelled path 14 and a projected path 16, which is viewable and recordable within a field of view (FOV) 18 of an imaging device 20 such as a camera. Preferably, as shown in
The imaging device 20 may capture a video of the ball-movement event (e.g., throw, kick, pitch, serve, drive, etc.), and subsequently process that video, or may acquire a live or substantially live video stream that is processed “on-the-fly” in order to track the speed of the ball 10 and report same. Other information that can be detected from the video, such as the angle of movement, etc., can also be determined and reported, e.g., by displaying the data and information to an operator of an electronic device as will be explained below.
The imaging device 20 provides the video data to a speed module 22 for detecting the ball 10 and tracking its speed, and typically also provides the video as an output on a display 24 to enable the video to be viewed by a user. Various user inputs 26 may also be provided to the speed module 22, e.g., via touch sensitivity inherent in the display 24 or by some other input mechanism such as a keyboard, button, mouse, etc. It can be appreciated that the manner in which the speed module 22, and interactivity therewith, is deployed can vary, and the examples provided herein are for illustrative purposes. For example, in a different implementation, only the imaging device 20 may be located at the sports field or arena, with the video data being sent “offsite” to be processed by the speed module 22 located in a separate computing device or computing service.
The electronic device 30 also stores or otherwise obtains or has access to a video file 36 that is input to the speed module 22 for tracking a ball 10 captured by that video file 36. It can be appreciated that the video file 36 can be stored in memory or provided via incoming streaming data, via a communication connection, or directly from the imaging device 20. For example, the video file 36 can be generated and stored while recording a sporting event, and used by the speed module 22 to perform a speed tracking operation. As such, it can be appreciated that the configuration shown in
The speed module 22, in addition to performing image processing, can provide or otherwise coordinate with an app having a user interface (UI) displayed using the display 24, and enabling a user to interact with the speed module 22 for performing tracking operations. Screen shots of example UIs are provided in
In
After selecting one of the options 44, or entering a ball size in the entry box 42 in the UI 40, a tracking type UI 50 may be displayed as shown in
The tracking process employed by the speed module 22 in this example is shown in
At step 100, a video is recorded or otherwise obtained. As discussed above, a video file 36 may have been previously recorded, or the video may be captured and processed in real- or substantially real-time. If a “live” video is being processed, it can be appreciated that the tracking process may lag the live video in order to obtain enough frames to detect the ball and calculate the speed. That is, the principles described herein can be adapted to provide ball speed tracking results in various scenarios and environments. For example, during a live video broadcast or recording, incoming video can be analyzed and the ball speed results determined and displayed after a pitch or serve has occurred and the processing has been completed.
At step 102, the video can optionally be cropped to select only or substantially only the ball 10 and path 14, 16 of the ball 10. For example, when analyzing a video file 36, a user can optionally choose to perform a manual cropping operation as illustrated in
The movement of the ball 10 is then tracked across at least a portion of the frames of the video at step 104. Various operations can be performed at step 104, as exemplified in
(1) The ball size and location across the frames; and
(2) The true size of the ball 10.
To illustrate the advantage of knowing the true ball size, consider the following example. If the ball is measured to be X pixels in diameter, and the real ball is Y millimetres in diameter, then the conversion factor is Y/X millimetres per pixel. A distance of Z pixels therefore corresponds to Z*(Y/X) millimetres. If the camera has a frame rate of K fps, and the ball travels Z pixels over T frames, then the speed of the ball is estimated as Z*(Y/X)/(T*(1/K)) mm/s.
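The arithmetic above can be sketched as a small helper function; the function name and the example values below are illustrative only:

```python
def estimate_speed_mm_per_s(ball_diameter_px, true_diameter_mm,
                            distance_px, frames_elapsed, fps):
    """Convert a pixel displacement into a real-world speed.

    ball_diameter_px: measured ball diameter in the frame (X pixels)
    true_diameter_mm: known true ball diameter (Y millimetres)
    distance_px:      distance travelled in pixels (Z)
    frames_elapsed:   number of frames over which the ball moved (T)
    fps:              camera frame rate (K frames per second)
    """
    mm_per_px = true_diameter_mm / ball_diameter_px   # Y/X
    distance_mm = distance_px * mm_per_px             # Z*(Y/X)
    elapsed_s = frames_elapsed / fps                  # T*(1/K)
    return distance_mm / elapsed_s
```

For instance, a 73 mm baseball measured at 20 pixels in diameter and travelling 228 pixels over 5 frames at 240 fps yields a speed of roughly 40 m/s, consistent with a fast pitch.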
The true size of the ball 10 is provided as an input at step 110 and is typically stored on the device and provided as an input from predetermined ball sizes or a manually entered ball size as illustrated in the UI 40 in
An example sub-routine for performing the tracking operations of steps 104 and 106 is shown in
At step 200 the video (or portion thereof) is read frame-by-frame into a tracking algorithm executed by the speed module 22. Each frame is then cropped and rotated at step 202, in order to provide an upright view of the search region for the ball 10. This may be done, for example, to detect left/right movement. In some cases, a mobile phone stores a video in a sideways orientation, even if recorded upright, and this step can compensate/correct for that.
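A minimal sketch of this crop-and-rotate step, assuming the crop region and rotation amount are already known (the function name and the crop-box convention are assumptions for illustration):

```python
import numpy as np

def crop_and_upright(frame, crop_box, quarter_turns=0):
    """Crop a frame to the search region and rotate it upright.

    crop_box is (top, bottom, left, right) in pixel coordinates, and
    quarter_turns is the number of 90-degree counter-clockwise turns
    needed to compensate for a sideways-stored video."""
    top, bottom, left, right = crop_box
    cropped = frame[top:bottom, left:right]
    return np.rot90(cropped, k=quarter_turns)
```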
The start frame is then determined in order to identify a number of frames prior to this start frame (e.g. 10) for training an adaptive background subtraction model at step 204. Various available models can be used, for example from OpenCV. The background subtraction model is used to determine background features that can be subtracted from the frame when tracking the ball 10 in the frames.
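OpenCV's `createBackgroundSubtractorMOG2` is one readily available model for this step. For illustration, a minimal adaptive background model in the same spirit can be sketched as a running average; the class name, adaptation rate, and threshold below are assumptions, not the specific model described above:

```python
import numpy as np

class RunningAverageBackground:
    """Simple adaptive background model: the background estimate is an
    exponentially weighted average of the training frames, and the
    foreground mask is a threshold on the deviation from it."""

    def __init__(self, alpha=0.1, threshold=25):
        self.alpha = alpha          # adaptation rate for the running average
        self.threshold = threshold  # intensity difference treated as foreground
        self.background = None

    def train(self, frame):
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame
        else:
            # Blend the new frame into the running background estimate
            self.background = (1 - self.alpha) * self.background + self.alpha * frame

    def apply(self, frame):
        # Foreground mask: pixels that deviate from the learned background
        diff = np.abs(frame.astype(np.float32) - self.background)
        return (diff > self.threshold).astype(np.uint8) * 255
```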
Once the background subtraction model is trained and the start frame is reached, each frame is processed at step 206 to apply: background subtraction to remove unwanted background detail, noise removal, and motion filtering. It can be appreciated that noise removal can include removing noise from small movements of the camera and in the background. Methods such as erosion and dilation can be applied here. The motion filtering can be done by subtracting adjacent frames and applying a blurring filter.
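In practice these cleanup steps would typically use library routines such as OpenCV's `erode` and `dilate`; a self-contained NumPy sketch of 3x3 erosion, 3x3 dilation, and adjacent-frame differencing, assuming a binary foreground mask, is as follows (kernel size and threshold are illustrative):

```python
import numpy as np

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 kernel: a pixel survives only if its
    entire 3x3 neighbourhood is foreground, removing speckle noise."""
    out = mask > 0
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant", constant_values=False)
        kept = np.ones((h, w), dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                kept &= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out = kept
    return out.astype(np.uint8) * 255

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 kernel: grows the surviving regions
    back toward their original size after erosion."""
    out = mask > 0
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant", constant_values=False)
        grown = np.zeros((h, w), dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out = grown
    return out.astype(np.uint8) * 255

def motion_mask(prev_frame, frame, threshold=25):
    """Adjacent-frame difference: pixels whose intensity changed by more
    than the threshold between consecutive frames are flagged as motion."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255
```

Applying erosion followed by dilation (a morphological "opening") removes isolated noise pixels while restoring larger regions, such as the ball, to approximately their original size.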
With the background subtraction and noise removal operations applied, the remaining contours in the image are considered at step 208 as feasible detections of the ball 10 in each frame. The detections are generated using contour finding, wherein the largest contour in the image is considered to be the ball 10.
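Contour finding would typically be performed with a library routine such as OpenCV's `findContours`. As a self-contained illustration, the "largest region is the ball" heuristic can be sketched with a flood-fill connected-component search; the function names and the circular-area diameter estimate are assumptions for illustration:

```python
from collections import deque
import numpy as np

def largest_component(mask):
    """Find the largest connected foreground region in a binary mask and
    return its pixel coordinates; per the heuristic above, this region
    is taken to be the ball."""
    visited = np.zeros(mask.shape, dtype=bool)
    best = []
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and not visited[y, x]:
                # Breadth-first flood fill of this component
                component, queue = [], deque([(y, x)])
                visited[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    component.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(component) > len(best):
                    best = component
    return best

def centre_and_diameter(component):
    """Centre of mass of the region, and an approximate diameter derived
    from its area assuming the ball appears roughly circular."""
    ys = [p[0] for p in component]
    xs = [p[1] for p in component]
    centre = (sum(ys) / len(component), sum(xs) / len(component))
    diameter = 2.0 * (len(component) / 3.141592653589793) ** 0.5
    return centre, diameter
```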
With the ball locations (path) and sizes determined from the frames using the above process, estimates for the ball speed and angle of movement are calculated, as indicated above, taking into account the true ball size. The size of the contour and the true size of the ball 10 are used to generate a conversion between pixels and distance, as exemplified above. The center of the ball 10 is then tracked across the frames, and its location over time is used to calculate the speed.
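The final speed-and-angle calculation can be sketched as follows, assuming one detected centre per consecutive frame and a pixel-to-millimetre conversion factor derived as above (the function and parameter names are illustrative):

```python
import math

def speed_and_angle(centres_px, mm_per_px, fps):
    """Estimate speed (mm/s) and angle of movement (degrees from the
    horizontal) from a sequence of per-frame ball centres (x, y) in
    pixels. Assumes one centre per consecutive frame."""
    (x0, y0), (x1, y1) = centres_px[0], centres_px[-1]
    dx, dy = x1 - x0, y1 - y0
    distance_mm = math.hypot(dx, dy) * mm_per_px
    elapsed_s = (len(centres_px) - 1) / fps
    speed = distance_mm / elapsed_s
    # Image y grows downward, so negate dy for a conventional angle
    angle = math.degrees(math.atan2(-dy, dx))
    return speed, angle
```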
As shown in
For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. However, it will be understood by those of ordinary skill in the art that the examples described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the examples described herein. Also, the description is not to be considered as limiting the scope of the examples described herein.
It will be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.
It will also be appreciated that any module or component exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the electronic device 30, speed module 22, imaging device 20, any component of or related thereto, etc., or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
The steps or operations in the flow charts and diagrams described herein are just for example. There may be many variations to these steps or operations without departing from the principles discussed above. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.
Although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims.
The present application claims priority from U.S. Provisional Application No. 62/592,820 filed on Nov. 30, 2017, incorporated herein by reference.