Golf hitting data analyzer

Information

  • Patent Application
  • Publication Number
    20240245974
  • Date Filed
    April 05, 2024
  • Date Published
    July 25, 2024
  • Inventors
    • Chen; Xiangyang
  • Original Assignees
    • Shanghai Qianrun Information Technology Co., LTD
Abstract
The present invention relates to the field of golf hitting data analysis technology and discloses a golf hitting data analyzer, which includes a sensor group, a static golf ball detector, an ultra high speed motion detector, a cascaded high-speed motion target tracker, a hitting parameter calculator, and an output module. The present invention utilizes the advanced image processing capability of a GPU chip, combined with a color high-definition LED display screen, to display hitting data. The hitting data is processed in real time, discrete flight trajectories are calculated, and the three-dimensional flight trajectory of the golf ball is simulated in animation form, without the need for external simulator software, making the analyzer more convenient to carry and use.
Description
TECHNICAL FIELD

The present invention relates to the technical field of golf hitting data analysis, in particular to a golf hitting data analyzer.


BACKGROUND

A golf hitting data analyzer calculates and displays various key parameters of the swing, including swing path, club face angle and sweet spot, club head speed, ball flight trajectory, and hitting distance, providing comprehensive and accurate analysis for swing practice, club teaching, and club selection in specialty stores. The Foresight GC2 uses high-speed camera images to calculate the flight parameters of the golf ball at the moment of impact. The GCHawk works on a principle similar to that of the GC2, but its installation position is on an overhead ceiling; the GCQ uses four cameras, the additional pair being used to capture golf club face data. The drawbacks are:


1. The data of the GC2 and GCQ is displayed as pure text, without the function of calculating and displaying flight trajectories, which is not intuitive. If users want to view three-dimensional flight trajectories, they must connect external hardware such as a mobile phone or computer and run external simulator software, which increases the complexity of use.


2. The GC2 displays only six data items, which is not enough to fully reflect the hitting parameters.


3. The GC2 and GCQ use manually formulated feature values to distinguish golf balls from the background in infrared images. These feature values are based on the designer's experience and are not sufficient to distinguish some objects with similar infrared features. As a result, the ball measuring instrument is susceptible to interference from reflective objects such as white sneakers and smooth floors, resulting in incorrect judgments, failure to detect the ball, or incorrect positioning of the ball.


SUMMARY OF THE INVENTION

The present invention mainly solves the technical problems existing in the prior art and provides a golf hitting data analyzer.


In order to achieve the above objectives, the present invention adopts the following technical scheme: a golf hitting data analyzer, which includes a sensor group, a static golf ball detector, an ultra high speed motion detector, a cascaded high-speed motion target tracker, a hitting parameter calculator, and an output module. The sensor group includes a first sensor and a second sensor; the cascaded high-speed motion target tracker includes a medium precision golf ball tracker and a high-precision golf ball tracker; the hitting parameter calculator includes a golf ball 3D posture calculator, a speed calculator, and a spin calculator; and the output module includes a parabolic calculator, a graphics display, and interfaces such as Ethernet, USB, and Bluetooth.


As a preferred option, the second sensor reads a standard resolution video stream, which is then fed into a static golf ball detector for target detection.


As a preferred option, the first sensor is a high-speed high-definition infrared camera that outputs a variable resolution video stream.


As a preferred choice, the medium precision golf ball tracker filters the background and noise, calculates the golf ball position to pixel-level accuracy, and obtains a high-resolution filtered video stream.


As a preferred choice, the golf ball 3D pose calculator performs 3D pose estimation on these two sets of keyframes from two cascaded trackers, while the speed calculator and spin calculator read the positions of a set of spherical feature points in the two keyframes.


As a preferred option, the parabolic calculator, combined with air pressure parameters, continues to estimate the flight trajectory of the ball, which is then handed over to a graphical display to complete the 3D trajectory animation and the text overlay of the hitting parameters.


As a preferred option, the real-time image data collected by the high-speed high-definition infrared camera and the high-speed standard definition infrared camera is sent in parallel to the FPGA (field programmable gate array). The resulting hitting data is then packed into a specific data format and output to peripherals such as Ethernet, USB, and Bluetooth.


Beneficial Effects

The present invention provides a golf hitting data analyzer with the following beneficial effects:


(1) The golf hitting data analyzer utilizes the advanced image processing capabilities of a GPU chip, combined with a color high-definition LED display screen to display hitting data. It processes hitting data in real-time, calculates discrete flight trajectories, and simulates the three-dimensional flight trajectory of a golf ball in animation form, without the need for external simulator software, making it more convenient to carry and use.


(2) The golf hitting data analyzer of this invention simultaneously displays 11 parameters related to hitting, which is 5 more than GCQ, and more comprehensively reflects the key attributes that affect the flight performance of golf balls.


(3) The golf hitting data analyzer of this invention has excellent anti-interference ability, and can accurately recognize and locate golf balls even in the presence of smooth ground, white sneakers, and other interfering objects in the background.


BRIEF DESCRIPTION OF THE DRAWINGS

In order to provide a clearer explanation of the embodiments of the present invention or the technical solutions in the prior art, a brief introduction to the accompanying drawings required in the description of the embodiments or prior art is given below. It is obvious that the accompanying drawings in the following description are only illustrative; for those of ordinary skill in the art, other drawings can be derived from the provided drawings without creative effort.


The structures, proportions, sizes, and the like depicted in this specification are only intended to complement the content disclosed herein, for those familiar with the technology to understand and read, and are not intended to limit the conditions under which the present invention can be implemented; therefore, they have no substantive technical significance. Any modification of structure, change of proportional relationship, or adjustment of size that does not affect the efficacy and objectives achievable by the present invention shall still fall within the scope of the technical content disclosed by the present invention.






FIG. 1 is a technical solution diagram of the present invention;



FIG. 2 is a flowchart of the present invention;



FIG. 3 is a component diagram of the present invention;



FIG. 4 shows three frame images of the high-speed ball of the present invention;



FIG. 5 shows three frame images of the low-speed ball of the present invention.





LEGEND EXPLANATION

A1. Variable resolution video stream; A2. Standard resolution video stream; B. Standard resolution video stream; C. Low resolution video stream; D. High resolution filtered video stream; E. Keyframes after noise reduction and background filtering; F. Hitting parameter data.


DETAILED DESCRIPTION OF THE EMBODIMENTS

The following provides a clear and complete description of the technical solution in the embodiments of the present invention, in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.


Implementation example: a golf hitting data analyzer, as shown in FIGS. 1 to 5. In the idle state, the device continuously reads a standard resolution video stream from the second sensor. The standard resolution video stream is sent to the static golf ball detector for target detection. When a static golf ball is detected, the system enters the ready state: the second sensor stops outputting the standard resolution video stream and switches to an ultra-high frame rate, low resolution video stream, which is sent to the ultra high speed motion detector for motion detection. When a motion event is detected, the system enters the data collection state: the second sensor stops outputting the low resolution video stream and switches to a high frame rate, standard resolution video stream. At the same time, the first sensor, a high-speed high-definition infrared camera, outputs a video stream with variable frame rate and resolution.

These two video streams are respectively fed into the two cascaded high-speed motion target trackers. In the cascaded tracker, the first-level medium precision golf ball tracker filters the background and noise, calculates the golf ball position to pixel-level accuracy, and produces the high-resolution filtered video stream. The second-level high-precision golf ball tracker reads this filtered video stream, completes sub-pixel level golf ball tracking, outputs keyframes after noise reduction and background filtering, and sends them to the subsequent hitting parameter calculator.

The hitting parameter calculator consists of three parts. First, the golf ball 3D pose calculator performs 3D pose estimation on the two sets of keyframes (two frames in each set) from the two cascaded trackers, and the positions of a set of spherical feature points in the two keyframes are calculated. The speed calculator and spin calculator then read these position values, compare the displacement and other changes of the ball in 3D space, and calculate the instantaneous motion parameters of the golf ball, such as launch angle, launch direction, ball speed, rotation axis, and rotational speed, collectively referred to as the hitting parameter data.

The hitting parameter data is sent to the output module. In the output module, the trajectory of the ball is first estimated by the parabolic calculator combined with air pressure parameters; the three-dimensional trajectory animation and the hitting parameter text are then overlaid on the graphical display. The original hitting data is also assembled into messages of a specific format and sent to external devices through communication lines such as Ethernet, USB, and Bluetooth.

The real-time image data collected by the two cameras, the high-speed high-definition infrared camera and the high-speed standard definition infrared camera, is sent in parallel to the FPGA (field programmable gate array). After preliminary processing and filtering, the image data is sent to the GPU/CPU. After the main ball parameter calculation is completed in the GPU/CPU, the parameters of the golf ball are sent to the high-definition touch screen display for animation display and text overlay, and are also packaged into messages of a specific format and sent to the serial interface, Ethernet card, and Bluetooth transceiver unit for transmission to external devices, for use by extension programs such as a course simulator.
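For illustration only, the idle/ready/collection switching described above can be summarized as a small state machine. The following Python sketch assumes hypothetical interfaces (sensor objects with read/set_mode methods, and detect_static_ball/detect_motion callables standing in for the static golf ball detector and the ultra high speed motion detector); none of these names appear in the original specification.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # watching for a ball placed in view
    READY = auto()     # static ball found; watching for the strike
    COLLECT = auto()   # motion detected; the trackers take over

def control_loop(sensor1, sensor2, detect_static_ball, detect_motion):
    # sensor1/sensor2 and the two detector callables are hypothetical
    # stand-ins for the first/second sensor, the static golf ball
    # detector, and the ultra high speed motion detector.
    state = State.IDLE
    while state is not State.COLLECT:
        if state is State.IDLE:
            frame = sensor2.read()               # standard resolution stream
            if detect_static_ball(frame):
                # switch to ultra-high frame rate, low resolution
                sensor2.set_mode("low_res_ultra_high_fps")
                state = State.READY
        else:                                    # State.READY
            frame = sensor2.read()               # low resolution stream
            if detect_motion(frame):
                # switch streams for data collection
                sensor2.set_mode("standard_res_high_fps")
                sensor1.set_mode("variable_fps_and_resolution")
                state = State.COLLECT
    # hand off to the cascaded trackers and the hitting parameter calculator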


The Working Principle of the Present Invention:

Our innovative solution is a two-step motion detection plus tracking method, which divides the problem into two steps: step one, real-time high-speed motion detection; step two, offline video tracking and solution. Due to the use of a high-efficiency motion detection algorithm, the device can detect the hitting event within 0.25 milliseconds after the ball is struck in step one, and continuously capture no less than 3 frames of video within the subsequent time t3. After caching, the video becomes an offline video, and offline video tracking and parameter calculation are implemented in step two. This eliminates the need for time-consuming target tracking of real-time video and ensures that the calculation of the hitting parameters is completed as soon as possible, within 80 milliseconds.


A multi-frame image method is used to solve the ball flight data in real time. Calculating the parameters of the ball at the moment of flight requires at least two synchronized images to calculate its 3D position. The difficulty lies in the fact that a single time interval between two frames cannot simultaneously meet the needs of solving for high-speed and low-speed balls. Specifically, the first frame is a still picture at time t0, so the time at which the second frame is captured is critical. If the second frame is captured at time t3 (refer to the frame at time t3 in FIG. 5), a low-speed ball has by then undergone sufficient displacement and turned through a large enough angle, so by comparing it with the frame at time t0, the ball speed, launch angle, and spin parameters can be calculated. For a medium-speed ball, the t2 moment is best: the ball's displacement and rotation are moderate, and the solution accuracy is highest. However, for a high-speed ball, the delay until t2 is too large, enough to cause the ball to fly out of the camera's visual range and the solution to fail (refer to the frame at time t2 in FIG. 4). Capturing the second frame at the shorter time t1 gives a high-speed ball displacement and rotation large enough to meet the calculation needs (refer to the difference between the frames at times t1 and t0 in FIG. 4). However, for a low-speed ball, the displacement and rotation of the golf ball in the image at t1 are not sufficient, so the solution will also fail (refer to the comparison between the frames at times t1 and t0 in FIG. 5).


Our solution is to select from the four images, combining the two situations of FIG. 4 and FIG. 5: first filter out the t2 and t3 images for a high-speed ball (t2, t3 in FIG. 4); then filter out the t1 and t3 images for a medium-speed ball (t1, t3 in FIG. 5); finally filter out the t1 and t2 images for a low-speed ball (t1, t2 in FIG. 5). The specific method is to capture images at the three times t1, t2, and t3 to obtain the second, third, and fourth frames respectively. First, the fourth frame is compared with the first frame; if the fourth frame does not capture the image of the ball or the hitting parameters cannot be solved, the fourth frame (t3 in FIG. 4) is ignored and the third frame is compared with the first frame. If the image of the ball is not captured in the third frame or the hitting parameters cannot be calculated, the third frame (t2 in FIG. 4) is ignored, and the second frame is compared with the first frame to calculate the hitting parameters. If the image of the ball is captured in the third or fourth frame and the hitting parameters can be calculated, this is a low-speed ball, and its second frame can be safely ignored (t1 in FIG. 5), as sketched below.
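For illustration, the fallback comparison order described above (fourth frame, then third, then second, each against the first frame) can be sketched as follows; solve_params is a hypothetical solver that returns None when the ball is missing from the candidate frame or the parameters cannot be solved.

def select_comparison_frame(frame_t0, frame_t1, frame_t2, frame_t3, solve_params):
    # solve_params(f0, f1) is a hypothetical solver returning the hitting
    # parameters, or None when the ball is absent from f1 or the
    # parameters cannot be solved from the pair.
    # Try the latest capture first; fall back toward shorter intervals.
    for candidate in (frame_t3, frame_t2, frame_t1):
        params = solve_params(frame_t0, candidate)
        if params is not None:
            return params                        # first solvable pair wins
    return None                                  # no usable pair: detection failed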


In addition, expanding the field of view of the acquisition sensor could solve the problem of the high-speed ball exceeding the boundary in FIG. 4, allowing the problem to be solved with only two frames of images. However, on the one hand, this places higher demands on the sensor, which increases hardware cost; on the other hand, images with a larger field of view carry more data, which increases the computational load and device response time, causing performance loss.


When converting 2D images into a 3D scene model, the golf hitting data analyzer uses dual cameras to obtain external image data I1, I2, and the calculation involves converting the data from 2D to 3D. This is achieved through the calibration of the dual cameras. After calibration, using the intrinsic parameter matrices, the distortion parameters (k1, k2, k3, p1, p2), and the spatial transformation (R, T), the 2D data can be corrected and the impact of lens distortion reduced; the spatial transformation relationship then allows the 3D data to be calculated from the undistorted 2D data of the dual cameras:







P_{w1} = K_1 \begin{Bmatrix} u_1 \\ v_1 \\ 1 \end{Bmatrix}

P_{w2} = K_2 \begin{Bmatrix} u_2 \\ v_2 \\ 1 \end{Bmatrix}

P_{w1} = R \cdot P_{w2} + T





A point in the world coordinate system is represented as (u1, v1) in the C1-camera image and as (u2, v2) in the C2-camera image. If R, T and the intrinsic matrices K1, K2 are known, the corresponding 3D positions Pw1, Pw2 in the two camera coordinate systems can be obtained.
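As a minimal sketch consistent with the relations above (not necessarily the patent's own implementation), the 3D position can be recovered by back-projecting each pixel through its intrinsic matrix and taking the midpoint of the closest points of the two rays; here Pw1 = R·Pw2 + T maps C2 coordinates into C1 coordinates, and numpy is assumed.

import numpy as np

def triangulate(K1, K2, R, T, uv1, uv2):
    # Midpoint triangulation of one point seen at pixel uv1 in C1 and
    # uv2 in C2, with Pw1 = R @ Pw2 + T mapping C2 into C1 coordinates.
    d1 = np.linalg.inv(K1) @ np.array([uv1[0], uv1[1], 1.0])        # ray in C1
    d2 = R @ (np.linalg.inv(K2) @ np.array([uv2[0], uv2[1], 1.0]))  # C2 ray in C1
    o1, o2 = np.zeros(3), np.asarray(T, float)   # camera centers in the C1 frame
    # Closest points o1 + s*d1 and o2 + t*d2: least-squares in (s, t)
    A = np.stack([d1, -d2], axis=1)
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2)) # 3D point in the C1 frame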


After calibration, the device needs to establish the relevant scene models, so that it knows the expressions for the ground plane, the positive direction, and so on. First, the calculation of the spatial equation of the ground plane requires at least three spatial points. When calibrating the ground plane, the calibration board is placed near to and far from the golf hitting data analyzer device (approximately 30 cm and 50 cm). The 3D data of the two outermost points in the row of the calibration board closest to the ground is taken from the image, so that a total of four 3D points {Pw11, Pw12, Pw13, Pw14} are obtained from the two placements. These four points are used to calculate the spatial plane Ax+By+Cz+D=0; this plane equation is the spatial expression of the ground plane. Next is the calibration of the positive direction vector; the calculation of a vector requires at least two spatial points. When calibrating the positive direction, the calibration board is placed near the golf hitting data analyzer device (about 30 cm), and the 3D data of the two outermost points in the row of the calibration board closest to the ground is taken from the image, giving two 3D points {Pw15, Pw16} from one placement. The projection points {P′w15, P′w16} of these two points on the ground plane are calculated to obtain the direction vector P′w16 − P′w15; this vector is the positive direction vector.
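For illustration, a least-squares sketch of the two calibration steps above: fitting Ax+By+Cz+D=0 through the four board points, then projecting the two rod points onto that plane to form the positive direction vector. numpy is assumed and the function names are illustrative.

import numpy as np

def fit_ground_plane(points):
    # Least-squares plane Ax + By + Cz + D = 0 through >= 3 points,
    # e.g. the four board points {Pw11, Pw12, Pw13, Pw14} above.
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                              # unit normal (A, B, C)
    return normal, -normal @ centroid            # (A, B, C), D

def project_to_plane(p, normal, d):
    # Orthogonal projection of p onto the plane (normal assumed unit length).
    return p - (normal @ p + d) * normal

# Positive direction from the two points Pw15, Pw16 of the second placement:
# normal, d = fit_ground_plane([Pw11, Pw12, Pw13, Pw14])
# direction = project_to_plane(Pw16, normal, d) - project_to_plane(Pw15, normal, d)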


In use, the detection of golf balls is divided into two parts: rough object detection based on deep learning, and precise extraction based on image processing. The deep learning object detector is trained on annotated golf ball images collected by golf hitting data analyzer devices. However, this algorithm can only roughly detect the target position and cannot accurately find the boundary and center position of the golf ball, so a further precise extraction algorithm is needed. To optimize the effectiveness of object detection, the images are first passed through an image enhancement algorithm, preserving the boundaries of the golf ball while making it more prominent in the image.


The detection result of the deep learning stage is a set of rectangular regions Rect(x, y, w, h) that may contain targets; precise extraction is carried out within the expanded rectangular region Rect(x−Δ, y−Δ, w+2*Δ, h+2*Δ). The algorithm needs to automatically calculate a threshold to binarize the image, and find the golf ball boundary in the binary image to calculate the center position. During threshold calculation, a histogram is used to find the peak value MaxLoc, and, taking the image center as the center of a circle with radius R=(w+h)/2+δ (δ<Δ), the average grayscale ave of the background part is calculated. The threshold calculation logic is as follows:

















if (MaxLoc < ave || MaxLoc > 200)
   MaxLoc = ave
if (MaxLoc < 20) thred = 20
else thred = MaxLoc * 0.75










Due to possible logo markings, reflective spots, and the like on the surface of the golf ball, there may be some small black spots inside the binarized image of the ball. By dilation and erosion, these small black spots can be eliminated without affecting the boundaries of the binarized image. Subsequently, starting from the center of the image, boundaries are searched in 36 directions around it, and the found boundary points are used for circle fitting through random sample consensus (RANSAC). Considering that the threshold has a significant impact on image binarization, the algorithm cyclically varies the binarization threshold to find the optimal circle fit; the more fitting points that conform to the circle equation, the better the result.
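A minimal sketch of circle fitting by random sample consensus over the extracted boundary points, scoring each hypothesis by the number of points that conform to the circle equation (consistent with the criterion above); the tolerance and iteration count are illustrative assumptions.

import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Circle through three 2D points; None if they are collinear.
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    det = 2.0 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
    if abs(det) < 1e-9:
        return None
    b1 = bx * bx + by * by - ax * ax - ay * ay
    b2 = cx * cx + cy * cy - ax * ax - ay * ay
    ux = ((cy - ay) * b1 - (by - ay) * b2) / det
    uy = ((bx - ax) * b2 - (cx - ax) * b1) / det
    return (ux, uy), float(np.hypot(ux - ax, uy - ay))

def ransac_circle(points, iters=200, tol=1.0, seed=0):
    # Keep the circle with the most boundary points on it (inliers).
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        fit = circle_from_3pts(*pts[rng.choice(len(pts), 3, replace=False)])
        if fit is None:
            continue
        (cx, cy), r = fit
        dist = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = fit, inliers
    return best                                  # ((cx, cy), r) or None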


In use, due to hardware constraints, the captured images are not completely synchronized. Taking the trigger time as the benchmark, the capture times of the two cameras are {t0, t01, t02, t2} and {t0, t11, t2}. After obtaining the ball centers P1,t0 and P2,t0 in the two camera images at time t0, the 3D coordinate position Pw1,t0 of the golf ball in the C1-camera coordinate system can be calculated directly from the dual-camera calibration relationship. In order to calculate the flight data of the ball, the 3D coordinate position at another time is also needed. The positions of the sphere center in the C1-camera at times t0 and t01 are P1,t0 and P1,t01 respectively, and the positions of the sphere center in the C2-camera at times t0 and t11 are P2,t0 and P2,t11 respectively. P2,t0 and P2,t11 are converted into the C2-camera coordinate system as Ṗ2,t0 and Ṗ2,t11; taking parameters m1, m2 gives 3D points m1·Ṗ2,t0 and m2·Ṗ2,t11 in the C2-camera coordinate system, and these points constitute a spatial plane equation in the C2-camera coordinate system based on the two points P2,t0 and P2,t11. In the same way, in the C1-camera coordinate system there are m3·Ṗ1,t0 and m4·Ṗ1,t01. Through the dual-camera calibration, the 3D points in the C2-camera coordinate system can be converted into the C1-camera coordinate system as Ṗ′2,t0 and Ṗ′2,t11. At this time there are six 3D points in total in the C1-camera coordinate system, from the C1-camera image and the C2-camera image respectively, which form two planes. The intersection line of the two planes is the straight line on which the flight trajectory of the ball lies, and from it the corresponding 3D coordinates Pw1,t01 of the ball center at time t01 in the C1-camera coordinate system can be obtained.
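The final geometric step above, intersecting the two planes spanned by each camera's sight rays to obtain the line containing the flight trajectory, can be sketched as follows; the plane inputs (n, d) for n·x + d = 0 are assumed to have been built from the points described above.

import numpy as np

def plane_intersection(n1, d1, n2, d2):
    # Line of intersection of n1·x + d1 = 0 and n2·x + d2 = 0,
    # returned as (point on line, unit direction).
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-9:
        raise ValueError("planes are parallel")
    # Pick the particular point with no component along the direction.
    A = np.stack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return point, direction / np.linalg.norm(direction)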







\text{Speed: } V = ( P_{w1,t01} - P_{w1,t0} ) / T

\text{Launch angle: the angle between the vector } P_{w1,t01} - P_{w1,t0} \text{ and the ground plane}

\text{Azimuth angle: the angle between the projection of } P_{w1,t01} - P_{w1,t0} \text{ on the ground plane and the positive direction}
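A direct sketch of the three relations above, assuming the unit ground plane normal and the positive direction vector come from the calibration described earlier (angles returned in degrees; the helper names are illustrative):

import numpy as np

def hitting_parameters(p_t0, p_t01, dt, ground_normal, positive_dir):
    # Speed, launch angle and azimuth from the two ball centers above;
    # ground_normal is assumed to be unit length.
    v = (p_t01 - p_t0) / dt                      # V = (Pw1,t01 - Pw1,t0) / T
    speed = np.linalg.norm(v)
    u = v / speed
    # Launch angle: angle between the vector and the ground plane
    launch = np.degrees(np.arcsin(np.clip(u @ ground_normal, -1.0, 1.0)))
    # Azimuth: angle between the ground projection and the positive direction
    proj = u - (u @ ground_normal) * ground_normal
    pd = positive_dir / np.linalg.norm(positive_dir)
    cosang = proj @ pd / np.linalg.norm(proj)
    azimuth = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return speed, launch, azimuth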






In use, the golf hitting data analyzer device has no special requirements for the golf ball. The algorithm only uses the alternating light and dark texture of the surface as its grayscale image feature. In order to highlight the features, a gradient algorithm is used to enhance the surface features. In order to obtain more accurate rotation matching, the image features are spherically mapped; with one pixel as the unit, the size of the feature image after constrained, normalized spherical mapping is 200×200. However, the image features near the edge of the sphere are not obvious, and the presence of reflective points in the central area is not conducive to calculation, so in the actual calculation only the image in the range 20 to 180 is used and the central area is removed; the final feature image size is 160×160. In order to further improve the matching effect, the algorithm uses the positions with larger values after the gradient transformation as a weight feature map FW. Such highlighted positions generally belong to the ball's logo, and in subsequent processing the matching weight of these positions is increased.
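For illustration, the weight feature map FW described above can be built by thresholding the gradient magnitude of the feature image; the threshold value is an assumption, not taken from the specification.

import numpy as np

def weight_feature_map(feature_img, grad_threshold):
    # FW: mark positions whose gradient magnitude exceeds the threshold
    # (typically the ball's logo); these get extra matching weight.
    gy, gx = np.gradient(feature_img.astype(float))
    return (np.hypot(gx, gy) > grad_threshold).astype(np.uint8)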


In use, solving for rotation requires that the positions of the observer and the observation target be relatively stationary; otherwise, even if the target is stationary, the observed grayscale characteristics of the target surface will change with the observation direction. When calculating the rotation, the two C2-camera images at times t0 and t11 are used. Due to the flight of the golf ball, the center position of the golf ball changes between the two images (P2,t0 and P2,t11) in both the x direction and the y direction, so the changes caused by the movement of the ball must be included and the parallax effect compensated:







\theta_{x,t0} = \arctan\left( ( x_{2,t0} - X_{c2} ) / f_{c2,x} \right)

\theta_{y,t0} = \arctan\left( ( y_{2,t0} - Y_{c2} ) / f_{c2,y} \right)

\theta_{x,t11} = \arctan\left( ( x_{2,t11} - X_{c2} ) / f_{c2,x} \right)

\theta_{y,t11} = \arctan\left( ( y_{2,t11} - Y_{c2} ) / f_{c2,y} \right)

d\theta_x = \theta_{x,t11} - \theta_{x,t0}

d\theta_y = \theta_{y,t11} - \theta_{y,t0}








Among them, fc2,x and fc2,y represent the calibrated focal lengths of the C2-camera, and Xc2 and Yc2 represent its calibrated optical center. Through the above process, dθx and dθy are the angles that need to be compensated in the x direction and the y direction respectively. After compensation, the grayscale feature image at time t11 has the parallax effect removed and is converted to the grayscale feature image as it would be observed from the position of the ball at time t0.
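A direct sketch of the compensation formulas above, with the calibrated focal lengths and optical center passed in as tuples:

import numpy as np

def parallax_compensation(p2_t0, p2_t11, fc2, c2):
    # dθx, dθy per the formulas above; fc2 = (fc2_x, fc2_y) are the
    # calibrated focal lengths and c2 = (Xc2, Yc2) the optical center.
    def angles(p):
        return (np.arctan((p[0] - c2[0]) / fc2[0]),
                np.arctan((p[1] - c2[1]) / fc2[1]))
    tx0, ty0 = angles(p2_t0)
    tx1, ty1 = angles(p2_t11)
    return tx1 - tx0, ty1 - ty0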


In 3D space, the rotation of the sphere can be decomposed into back/forward spin, side spin, and roll. In the feature image produced by the normalized spherical mapping, side spin is equivalent to a translational movement of the image, back/forward spin is equivalent to a rotation of the image, and roll requires the establishment of a dedicated remapping table. During image matching, the grayscale feature image at time t11 that has undergone translation, rotation, and remapping, denoted Ft11(a1, a2, a3), is matched with the feature image F0 at time t0. When matching, the two feature images compute a matching value ρ based on the grayscale of the pixels at the same location:

















if (Gt11(x, y) > τ && G0(x, y) > τ && Gw(x, y) == 1)
  ρ += 1.05
else if (Gt11(x, y) > τ && G0(x, y) > τ)
  ρ += 1
if (G0(x, y) > τ)
  n += 1










Among them, Gt11(x, y) represents the gray value of Ft11(a1, a2, a3) at (x, y), G0(x, y) represents the gray value of F0 at (x, y), Gw(x, y) represents the gray value of FW at (x, y), τ is the gray value threshold, and n is the number of edge feature points in F0 whose gray values are brighter than the threshold, i.e., those belonging to the edge features to be matched. The final matching value of the two feature images is then ρ = ρ/n; the larger the value of ρ, the better the match. Finally, the specific rotation angle θ and rotation axis are calculated from the transformation of the feature image Ft11(a1, a2, a3) at which the maximum ρ is obtained. When solving, multiple points are taken on the original image Ft11(0, 0, 0) and their corresponding positions on the transformed image Ft11(a1best, a2best, a3best) are calculated. The rotation angle θ and rotation axis calculated at this time are what the C2-camera observed at time t=0; to convert them to the world coordinate system, it is necessary to compensate for the vertical inclination θy,C2 of the camera by rotating the rotation axis accordingly.
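For illustration, the matching rule and the search for the best transformation parameters (a1, a2, a3) can be sketched as follows; transform is a hypothetical callable producing the translated/rotated/remapped feature image Ft11(a1, a2, a3), and the candidate grid is an assumption.

import numpy as np

def match_score(F_t11, F_0, F_W, tau):
    # ρ/n per the matching rule above (weight 1.05 inside FW, else 1).
    both = (F_t11 > tau) & (F_0 > tau)
    rho = 1.05 * np.count_nonzero(both & (F_W == 1)) \
        + 1.00 * np.count_nonzero(both & (F_W != 1))
    n = np.count_nonzero(F_0 > tau)
    return rho / n if n else 0.0

def search_rotation(F_0, F_W, tau, transform, candidates):
    # transform(a1, a2, a3) is a hypothetical callable producing the
    # translated/rotated/remapped feature image Ft11(a1, a2, a3).
    best, best_rho = None, -1.0
    for a1, a2, a3 in candidates:
        rho = match_score(transform(a1, a2, a3), F_0, F_W, tau)
        if rho > best_rho:
            best, best_rho = (a1, a2, a3), rho
    return best, best_rho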


In use, a calibration rod is used for zero position detection; the purpose of the equipment's zero position detection is to allow users to adjust the zero direction according to their own needs. The device provides a long rod that images brightly under infrared conditions to achieve this function. After obtaining the camera's grayscale image, the algorithm performs histogram statistics and accumulates the number of pixels from high gray values to low. When the accumulated number of pixels exceeds a certain threshold, the current gray value minus 30 is taken as the image binarization parameter. After binarization, the number of highlighted pixels is counted by row; the row with the most highlighted pixels is most likely to belong to the calibration rod. Then, sliding windows are used to search for highlighted pixels window by window, and a straight line is fitted to the extracted highlighted pixel positions, giving the calibration rod's line equation in each camera image. Because of the sliding window method, if the calibration rod appears at a large slope in the image, the detection performance of the algorithm is greatly reduced; therefore, the detection is performed by cyclically rotating the image within a certain range, and the optimal result is the one whose line fitting parameters are closest to horizontal. After compensating for the rotation parameters, the line equation of the calibration rod in the original image is obtained. At this point, the line equations of the calibration rod are available in the images of both cameras. After converting the line equations into plane equations in their respective camera coordinate systems, the plane equation of either camera is converted to the coordinate system of the other camera; the intersection line of the two planes is the 3D line equation of the calibration rod. Two points on this line are projected onto the ground plane, and the angle between their direction and the positive direction is the placement deviation angle.
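A minimal sketch of the threshold selection and sliding-window line fit described above, under the assumption that "the current grayscale value of −30" means backing off 30 gray levels from the level at which the cumulative bright-pixel count exceeds the threshold; the window size and pixel count are illustrative.

import numpy as np

def rod_threshold(gray, pixel_count):
    # Binarization level: accumulate the histogram from bright to dark
    # until pixel_count is exceeded, then back off by 30 gray levels
    # (our reading of "the current grayscale value of -30").
    hist = np.bincount(gray.ravel(), minlength=256)
    cum = np.cumsum(hist[::-1])                  # bright-to-dark accumulation
    level = 255 - int(np.searchsorted(cum, pixel_count))
    return max(level - 30, 0)

def fit_rod_line(gray, pixel_count=2000, win=16):
    # Fit the calibration rod as y = a*x + b via sliding windows.
    binary = gray > rod_threshold(gray, pixel_count)
    row = int(np.argmax(binary.sum(axis=1)))     # row with most bright pixels
    xs, ys = [], []
    for x0 in range(0, binary.shape[1], win):    # window-by-window search
        y0 = max(row - win, 0)
        ys_w, xs_w = np.nonzero(binary[y0:row + win, x0:x0 + win])
        if len(xs_w):
            xs.append(x0 + xs_w.mean())
            ys.append(y0 + ys_w.mean())
    if len(xs) < 2:
        return None                              # rod not found
    a, b = np.polyfit(xs, ys, 1)                 # least-squares line fit
    return a, b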


In use, in order to accurately capture the golf hitting motion, after the golf ball is detected, the edge area of the ball in the hitting direction is used as the detection and judgment area. In the image, the background part has a lower grayscale value, while the golf ball part has a higher grayscale value. When the target moves, the pixels in this part of the image change significantly, that is, pixels that originally belonged to the background are replaced by pixels on the ball, and their grayscale values increase.


Fast moving-target detection is achieved inside the chip using the frame difference method. After the system enters detection mode, an appropriate reference frame is selected and the correlation statistic of the high-speed continuous input video frames is calculated in real time, as shown in the following formula:






N = \sum_i \left( \left( \left| x_i - Xref_i \right| > \text{Threshold} \right) \; ? \; 1 : 0 \right)






where xi is the gray value of each pixel, Xrefi is the gray value of the corresponding pixel in the reference frame, Threshold is the threshold, and N is the total number of counted points. The programmable chip uses multi-level pipeline operations to improve computational efficiency, with each operation taking less than one frame cycle. The software and hardware interaction module uses corresponding protocols to set the thresholds; when the real-time calculated statistic N is compared with its threshold and meets the requirement, the current video frame is processed in real time (switching the input source or storing in real time), completing one fast detection. A high-speed ball requires a frame interval of 0.25 milliseconds, while a low-speed ball requires 3 to 5 frames, approximately 0.75 to 1.25 milliseconds; our motion detection algorithm can always detect the change quickly and accurately.
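A direct sketch of the statistic N in the formula above (the FPGA computes this per pixel in a hardware pipeline; numpy stands in for that here):

import numpy as np

def motion_statistic(frame, ref, threshold):
    # N: pixels whose absolute gray-level difference from the reference
    # frame exceeds the threshold.
    return int(np.count_nonzero(
        np.abs(frame.astype(int) - ref.astype(int)) > threshold))

# Motion is declared when N itself exceeds a configured trigger level:
# if motion_statistic(frame, ref, threshold) > trigger_count: handle the event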


The above shows and describes the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the aforementioned embodiments; the descriptions in the embodiments and specification only illustrate the principles of the present invention. Without departing from the spirit and scope of the present invention, various changes and improvements may be made to the present invention, all of which fall within the scope of the claimed protection. The scope of protection claimed by the present invention is defined by the appended claims and their equivalents.

Claims
  • 1. A golf hitting data analyzer, characterized in that it includes a sensor group, a static golf ball detector, an ultra high speed motion detector, a cascaded high-speed motion target tracker, a hitting parameter calculator, and an output module; the sensor group includes a first sensor and a second sensor; the cascaded high-speed motion target tracker includes a medium precision golf ball tracker and a high-precision golf ball tracker; the hitting parameter calculator includes a golf ball 3D posture calculator, a speed calculator, and a spin calculator; and the output module includes a parabolic calculator, a graphics display, and interfaces such as Ethernet, USB, and Bluetooth.
  • 2. A golf hitting data analyzer according to claim 1, characterized in that the second sensor reads a standard resolution video stream, which is fed into a static golf ball detector for target detection.
  • 3. A golf hitting data analyzer according to claim 1, characterized in that the first sensor is a high-speed high-definition infrared camera that outputs a variable resolution video stream.
  • 4. A golf hitting data analyzer according to claim 1, characterized in that: the medium precision golf ball tracker filters the background and noise, calculates the golf ball position to pixel-level accuracy, and obtains a high-resolution filtered video stream.
  • 5. A golf hitting data analyzer according to claim 1, characterized in that: the golf ball three-dimensional posture calculator estimates the three-dimensional posture from the two sets of keyframes produced by the two cascaded trackers, and the speed calculator and spin calculator read the positions of a set of spherical feature points in the two keyframes.
  • 6. A golf hitting data analyzer according to claim 1, characterized in that: the parabolic calculator, combined with air pressure parameters, continues to estimate the flight trajectory of the ball, which is handed over to a graphical display to complete the drawing of the three-dimensional trajectory animation and the overlay of the hitting parameter text.
  • 7. A golf hitting data analyzer according to claim 1, characterized in that real-time image data collected by two cameras, a high-speed high-definition infrared camera and a high-speed standard definition infrared camera, is sent in parallel to an FPGA (field programmable gate array), and the resulting hitting data packets are output through peripherals such as Ethernet, USB, and Bluetooth.