DIAGNOSING METHOD OF GOLF SWING

Abstract
A diagnosing method of a golf swing according to the present invention includes the following steps: (x) a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club to obtain a plurality of frames for determining a shaft position; (y) a calculating part subjecting a check frame for judging the shaft position and the other frame to difference processing and binary processing using the plurality of frames, to obtain a binarized difference image; and (z) the calculating part subjecting the difference image or a corrected image thereof to Hough transform processing to attempt to extract the shaft position.
Description

The present application claims priority on Patent Application No. 2011-266491 filed in JAPAN on Dec. 6, 2011, the entire contents of which are hereby incorporated by reference.


BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates to a diagnosing method of quality of a golf swing.


2. Description of the Related Art


When a golf player hits a golf ball, the golf player addresses so that a line connecting right and left tiptoes is approximately parallel to a hitting direction. In a right-handed golf player's address, a left foot is located on a front side in the hitting direction, and a right foot is located on a back side in the hitting direction. In the address, a head of a golf club is located near the golf ball. The golf player starts a takeback from this state, and raises up the head backward and then upward. A position where the head is fully raised up is a top. A downswing is started from the top. A start point of the downswing is referred to as a quick turn. The head is swung down after the quick turn, and the head collides with the golf ball (impact). After the impact, the golf player swings through the golf club forward and then upward (follow-through), and reaches a finish.


In improvement in skill of a golf player, it is important to acquire a suitable swing form. Swing diagnosis is conducted so as to contribute to the improvement in the skill. In the swing diagnosis, a swing is photographed by a video camera. The swing may be photographed in order to collect materials useful for development of golf equipment.


In classic swing diagnosis, a teaching pro or the like views a moving image and points out problems during a swing. On the other hand, an attempt to diagnose the swing using image processing is also conducted. In the image processing, a frame required for diagnosis needs to be extracted from a large number of frames. An example of the extracting method is disclosed in Japanese Patent Application Laid-Open No. 2005-210666 (US2005/0143183). In the method, extraction is conducted by difference processing. An excellent image is required for accurate swing diagnosis. An example of a camera shake correction method when a swing is photographed is disclosed in Japanese Patent Application Laid-Open No. 2011-78066.


SUMMARY OF THE INVENTION

A golf club in which a mark is attached to a shaft is used in the method disclosed in Japanese Patent Application Laid-Open No. 2005-210666. The golf club needs to be preliminarily prepared. The method is suitable for diagnosis conducted based on photographing at a golf equipment shop. However, the method is unsuitable for diagnosis when a common golf club is swung in a golf course or a driving range.


It is an object of the present invention to provide a method capable of readily diagnosing quality of a swing.


A diagnosing method of a golf swing according to the present invention includes the following steps:


(x) a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club to obtain a plurality of frames for determining a shaft position;


(y) a calculating part subjecting a check frame for judging the shaft position and the other frame to difference processing and binary processing using the plurality of frames, to obtain a binarized difference image; and


(z) the calculating part subjecting the difference image or a corrected image of the difference image to Hough transform processing to attempt to extract the shaft position.


Preferably, the method further includes the following step (Sa) and/or step (Sb):


(Sa) the step of judging quality of a posture of the golf player according to whether the shaft position of the check frame is extracted in the step of attempting to extract the shaft position; and


(Sb) the step of judging the quality of the posture of the golf player according to the shaft position when the shaft position of the check frame is extracted in the step of attempting to extract the shaft position.


Preferably, the check frame is a frame of a top. Preferably, the check frame is a frame of a finish.


The present invention according to another aspect includes the following steps:


(1) a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club to obtain a plurality of frames for determining a shaft position;


(2) a calculating part extracting a plurality of predetermined frames from the plurality of frames;


(3) the calculating part performing difference processing and binary processing using the plurality of predetermined frames to obtain a plurality of binarized difference images;


(4) the calculating part subjecting the plurality of difference images to AND processing to obtain an AND image for extracting the shaft position;


(5) the calculating part determining a shaft searching region using the AND image; and


(6) the calculating part subjecting the plurality of AND images or corrected images of the AND images to Hough transform processing to attempt to time-sequentially extract the shaft position.


Preferably, the method includes the following steps of:


(7) the calculating part subjecting the difference image and/or the AND image to contraction processing and expansion processing to obtain a mask image; and


(8) the calculating part subjecting the AND image and the mask image to difference processing to obtain a masked difference image.


Preferably, the corrected image is the masked difference image.


Preferably, an extraction result of a shaft position of a check frame and an extraction result of a shaft position before the check frame are obtained as a result of the time-sequential extraction. Preferably, advisability of extraction of the shaft position in the check frame is judged according to whether the shaft position before the check frame is extracted when the shaft position of the check frame cannot be extracted. Preferably, the check frame is a frame of a top or a finish.


A diagnosing system of a golf swing according to the present invention includes:


(A) a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club;


(B) a memory storing photographed image data; and


(C) a calculating part.


The calculating part includes:


(C1) a function for extracting a plurality of predetermined frames from the image data;


(C2) a function for performing difference processing and binary processing using the plurality of predetermined frames to obtain a plurality of binarized difference images;


(C3) a function for subjecting the plurality of difference images to AND processing to obtain an AND image for extracting a shaft position;


(C4) a function for determining a shaft searching region using the AND image; and


(C5) a function for subjecting the plurality of AND images or corrected images of the AND images to the Hough transform processing to attempt to time-sequentially extract the shaft position.


The method according to the present invention can readily diagnose the quality of the golf swing.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a conceptual view showing a swing diagnosing system according to one embodiment of the present invention;



FIG. 2 is a flow chart showing a diagnosing method of a golf swing conducted by the system of FIG. 1;



FIG. 3 is an illustration showing a screen of a camera of FIG. 1;



FIG. 4 is a flow chart showing a determining method of a check frame;



FIG. 5 is a flow chart showing a method for determining a frame of an address;



FIG. 6 is an illustration for a Sobel method;



FIG. 7 is a binarized image;



FIG. 8 is a flow chart showing a method for determining a frame of an impact;



FIG. 9 is an image showing a result of a difference between a 44th frame and a reference frame;



FIG. 10 is an image showing a result of a difference between a 62nd frame and a reference frame;



FIG. 11 is an image showing a result of a difference between a 75th frame and a reference frame;



FIG. 12 is an image showing a result of a difference between a 76th frame and a reference frame;



FIG. 13 is an image showing a result of a difference between a 77th frame and a reference frame;



FIG. 14 is an image showing a result of a difference between a 78th frame and a reference frame;



FIG. 15 is a graph showing a difference value;



FIG. 16 is a flow chart showing a method for determining a frame of a top;



FIG. 17 is a graph showing a difference value;



FIG. 18 is a flow chart showing a method for determining a frame of a predetermined position of a takeback;



FIG. 19 is an image showing a result of a difference between a 30th frame and a reference frame;



FIG. 20 is an image showing a result of a difference between a 39th frame and a reference frame;



FIG. 21 is an image showing a result of a difference between a 41st frame and a reference frame;



FIG. 22 is an image showing a result of a difference between a 43rd frame and a reference frame;



FIG. 23 is an image showing a result of a difference between a 52nd frame and a reference frame;



FIG. 24 is an image showing a result of a difference between a 57th frame and a reference frame;



FIG. 25 is a flow chart showing an example of a method for extracting a shaft of a top;



FIG. 26 shows images generated in a process of the method of FIG. 25;



FIG. 27 shows a shaft searching range in an edge image of an address image;



FIG. 28 shows a mask image 1 showing a hand position in an address;



FIG. 29 shows a mask image 1 obtained by adding a position of a head to FIG. 28;



FIG. 30 shows a mask image 1 obtained by adding a hand position near a top to FIG. 29;



FIG. 31 shows an AND image showing an example of the shaft searching region; and



FIG. 32 shows an AND image showing another example of the shaft searching region.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, the present invention will be described below in detail based on preferred embodiments with reference to the drawings.


A system 2 shown in FIG. 1 is provided with a mobile telephone 4 and a server 6. The mobile telephone 4 and the server 6 are connected to each other via a communication line 8. The mobile telephone 4 is provided with a camera 10, a memory 12, and a transmitting/receiving part 14. Specific examples of the memory 12 include a RAM, an SD card (including a mini SD and a micro SD or the like), and other storage media. The server 6 is provided with a calculating part 16, a memory 18, and a transmitting/receiving part 20. The calculating part 16 is typically a CPU.


A flow chart of the diagnosing method of a golf swing conducted by the system 2 of FIG. 1 is shown in FIG. 2. In the diagnosing method, photographing is conducted by the camera 10 (STEP1). A screen before photographing is started is shown in FIG. 3. The screen is displayed on a monitor (not shown) of the mobile telephone 4. An address of a golf player 24 having a golf club 22 is photographed on the screen. On the screen, the golf player 24 is photographed from a back side. A first frame 26 and a second frame 28 are shown on the screen. These frames 26 and 28 are displayed by software executed on a CPU (not shown) of the mobile telephone 4. The frames 26 and 28 help the photographer determine the angle of the camera 10. The photographer sets the angle of the camera 10 so that the first frame 26 includes a grip 30 and the second frame 28 includes a head 32. Furthermore, the frames 26 and 28 contribute to determination of a distance between the camera 10 and the golf player 24.


Photographing is started from the state shown in FIG. 3. After the photographing is started, the golf player 24 starts a swing. The photographing is continued until a golf ball (not shown) is hit and the swing is ended. Moving image data is obtained by the photographing. The data includes a large number of frames. These frames are stored in the memory 12 (STEP2). The number of pixels of each of the frames is, for example, 640×480. Each of the pixels has RGB system color information.


The photographer or the golf player 24 operates the mobile telephone 4 to transmit the moving image data to the server 6 (STEP3). The data is transmitted to the transmitting/receiving part 20 of the server 6 from the transmitting/receiving part 14 of the mobile telephone 4. The transmission is conducted via the communication line 8. The data is stored in the memory 18 of the server 6 (STEP4).


The calculating part 16 conducts camera shake correction (STEP5). As described in detail later, the diagnosing method according to the present invention conducts difference processing between the frames. The camera shake correction enhances accuracy in the difference processing. An example of a method for the camera shake correction is disclosed in Japanese Patent Application Laid-Open No. 2011-78066. When the mobile telephone 4 has a sufficient camera shake correction function, the camera shake correction conducted by the calculating part 16 can be omitted.


The calculating part 16 determines a frame presented in order to decide quality of a swing from a large number of frames (STEP6). Hereinafter, the frame is referred to as a check frame. For example, frames corresponding to the following items (1) to (6) are extracted:


(1) an address


(2) a predetermined position during a takeback


(3) a top


(4) a quick turn


(5) an impact


(6) a finish


The predetermined position during the takeback includes a position where an arm is horizontal. The quick turn implies a state immediately after start of the downswing. In the quick turn, the arm is substantially horizontal. The details of an extracting step (STEP6) of the check frame will be described later.


The calculating part 16 determines an outline of each of the check frames (STEP7). Specifically, the calculating part 16 determines an outline of a body of the golf player 24 or an outline of the golf club 22. The calculating part 16 decides the quality (right or wrong) of the swing based on the outline (STEP8).


The deciding result is transmitted to the transmitting/receiving part 14 of the mobile telephone 4 from the transmitting/receiving part 20 of the server 6 (STEP9). The deciding result is displayed on the monitor of the mobile telephone 4 (STEP10). The golf player 24 viewing the monitor can know a portion of the swing which should be corrected. The system 2 can contribute to improvement in skill of the golf player 24.


As described above, the calculating part 16 determines the check frame (STEP6). The calculating part 16 has the following functions:


(1) a function for obtaining an edge image of a frame extracted from the image data;


(2) a function for subjecting the edge image to binarization based on a predetermined threshold value to obtain a binary image;


(3) a function for subjecting the binary image to Hough transform processing to extract a position of a shaft 34 of the golf club 22, and specifying a tip coordinate of the golf club 22;


(4) a function for comparing tip coordinates of different frames to determine a temporary frame in the address;


(5) a function for calculating color information in the reference area of each of the frames, counting backward from a frame after the temporary frame by a predetermined number, and determining a frame in the address based on change of the color information;


(6) a function for using a frame after the frame in the address by a predetermined number as a reference frame, calculating a difference value between each of frames after the reference frame and the reference frame, and determining a frame of an impact based on change of the difference value;


(7) a function for calculating a difference value between each of a plurality of frames before the frame of the impact and a frame just before the frame, and determining a frame of a top based on the difference value;


(8) a function for calculating a difference value between each of a plurality of frames after the frame of the impact and a frame just before the frame, and determining a frame of a finish based on the difference value;


(9) a function for calculating a difference value between each of a plurality of frames after the frame of the address and the frame of the address;


(10) a function for subjecting the difference images of each of the frames to Hough transform processing to extract the position of the shaft 34;


(11) a function for determining a frame of a predetermined position during a takeback based on change of the position of the shaft 34;


(12) a function for extracting a plurality of predetermined frames from the image data;


(13) a function for performing difference processing and binary processing using the plurality of predetermined frames to obtain a plurality of binarized difference images;


(14) a function for subjecting the plurality of difference images to AND processing to obtain an AND image;


(15) a function for determining a shaft searching region using the AND image;


(16) a function for subjecting the plurality of AND images to the Hough transform processing to attempt to time-sequentially extract a shaft position;


(17) a function for subjecting a plurality of corrected images obtained by correcting the plurality of AND images to the Hough transform processing to attempt to time-sequentially extract the shaft position;


(18) a labeling function for leaving a region having an area of a predetermined number or greater of pixels and eliminating a region having an area of less than the predetermined number of pixels;


(19) a function for performing contraction processing;


(20) a function for performing expansion processing;


(21) a function for subjecting the difference image to the contraction processing and the expansion processing to obtain a mask image;


(22) a function for subjecting the AND image and the mask image to difference processing to obtain a masked difference image;


(23) a function for subjecting the masked difference image to the Hough transform processing to attempt to extract the shaft position;


(24) a function for subjecting a check frame and one or more frames before the check frame to the Hough transform processing to obtain an extraction result of the shaft position for a plurality of frames; and


(25) a function for judging the shaft position and/or the quality of a swing in the check frame based on the plurality of extraction results in the item (24).


In the present application, the term “attempt to extract a shaft” is used because it may be difficult to extract the shaft in a check frame such as the top.


A flow chart of a determining method of the check frame is shown in FIG. 4. The determining method includes a step of determining the frame of the address (STEP61), a step of determining the frame of the impact (STEP62), a step of determining the frame of the top (STEP63), a step of determining the frame of the predetermined position of the takeback (STEP64), and a step of determining the frame of the finish (STEP65). The predetermined position of the takeback is, for example, a position where the arm is horizontal. The step of determining the frame of the finish (STEP65) may be omitted.


The step of determining the frame of the finish (STEP65) can determine a frame after the frame of the impact by a predetermined number as the frame of the finish, for example. The step of determining the frame of the finish (STEP65) may be conducted in the same way as the step of determining the frame of the top (STEP63).


Other check frames may be determined based on the frames determined by the method shown in FIG. 4. For example, a frame before the frame of the impact by a predetermined number can be defined as a frame of a quick turn.


A flow chart of a method for determining the frame of the address is shown in FIG. 5. In the method, each of the frames is converted into a grayscale image from an RGB image (STEP611). The conversion is conducted in order to facilitate subsequent edge detection. A value V in the grayscale image is calculated by, for example, the following numerical expression.






V=0.30·R+0.59·G+0.11·B


The edge is detected from the grayscale image, and the edge image is obtained (STEP612). At an edge, the change of the value V is great. Therefore, the edge can be detected by differentiating or taking differences of the change of the value V. Noise is preferably removed in the calculation of the differentiation or the difference. A Sobel method is exemplified as an example of the method for detecting the edge. The edge may be detected by another method. A Prewitt method is exemplified as the other method.



FIG. 6 is an illustration for the Sobel method. Characters A to I in FIG. 6 represent values V of the pixels. A value E′ is calculated from a value E by the Sobel method. The value E′ is edge intensity. The value E′ is obtained by the following numerical expression.






E′=(fx²+fy²)^(1/2)


In the numerical expression, fx and fy are obtained by the following numerical expression.






fx=C+2·F+I−(A+2·D+G)


fy=G+2·H+I−(A+2·B+C)


Each of the pixels of the edge image is binarized (STEP613). A threshold value for binarization is suitably determined according to the weather and the time or the like. A monochrome image is obtained by the binarization. An example of the monochrome image is shown in FIG. 7.
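
The steps STEP611 to STEP613 can be sketched compactly as follows, assuming 8-bit color frames held as OpenCV/NumPy arrays. This is only a sketch: the threshold value of 60 is an illustrative placeholder (the text leaves the threshold to be tuned according to the weather and the time), and the function name is not from the embodiment.

```python
import cv2
import numpy as np

def edge_binary(frame_bgr, threshold=60):
    """Grayscale conversion, Sobel edge intensity E', and binarization."""
    # OpenCV's weights (0.299, 0.587, 0.114) approximate V = 0.30R + 0.59G + 0.11B
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    fx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal response fx
    fy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical response fy
    edge = np.sqrt(fx ** 2 + fy ** 2)                # E' = (fx^2 + fy^2)^(1/2)
    return (edge >= threshold).astype(np.uint8)      # monochrome image (cf. FIG. 7)
```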


Data of the monochrome image is subjected to the Hough transform (STEP614). The Hough transform is a method for extracting a line from an image using the regularity of a geometric shape. A straight line, a circle, an ellipse, and the like can be extracted by the Hough transform. In the embodiment, a straight line corresponding to the shaft 34 of the golf club 22 is extracted by the Hough transform.


The straight line can be represented by an angle θ between a line perpendicular to the straight line and an x-axis, and a distance ρ between the straight line and an origin point. The angle θ is a clockwise angle having its center on the origin point (0, 0). The origin point is at the upper left. A straight line on an x-y plane corresponds to a point on a θ-ρ plane. Meanwhile, a point (xi, yi) on the x-y plane is converted into a sine curve represented by the following numerical expression on the θ-ρ plane.





ρ=xi·cos θ+yi·sin θ


When points which are on the same straight line on the x-y plane are converted into the θ-ρ plane, all sine curves cross at one point. When a point through which a large number of sine curves pass in the θ-ρ plane becomes clear, the straight line on the x-y plane corresponding to the point becomes clear.


Extraction of a straight line corresponding to the shaft 34 is attempted by the Hough transform. In a frame in which the shaft 34 is horizontal in the takeback, the axis direction of the shaft 34 approximately coincides with the optical axis of the camera 10. In such a frame, the straight line corresponding to the shaft 34 cannot be extracted. In the embodiment, ρ is not specified; θ is specified as 30 degrees or greater and 60 degrees or less; x is specified as 200 or greater and 480 or less; and y is specified as 250 or greater and 530 or less. Under these constraints, the extraction of the straight line is attempted. Since θ is restricted to this range, a straight line corresponding to an erected pole is not extracted. A straight line corresponding to an object placed on the ground and extending in a horizontal direction is also not extracted. Specifying θ as 30 degrees or greater and 60 degrees or less prevents a straight line which does not correspond to the shaft 34 from being falsely recognized as the straight line corresponding to the shaft 34. In the embodiment, among the straight lines in which the number of votes (the number of pixels through which one straight line passes) is equal to or greater than 150, the straight line having the greatest number of votes is regarded as the straight line corresponding to the shaft 34. In a frame in which the straight line corresponding to the shaft 34 is extracted by the Hough transform, the tip coordinate of the shaft 34 (the tip position of the straight line) is obtained (STEP615).
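
A minimal sketch of this constrained Hough voting follows. It fills the accumulator directly instead of calling a library routine so that the ranges for θ, x, and y and the vote threshold stated above can be applied explicitly; the function name and return convention are illustrative, not part of the embodiment.

```python
import numpy as np

def find_shaft_line(binary, theta_deg=(30, 60), min_votes=150,
                    x_range=(200, 480), y_range=(250, 530)):
    """Vote in (theta, rho) space over the constrained pixels and return the
    line with the most votes, or None if no line reaches min_votes."""
    ys, xs = np.nonzero(binary)
    keep = ((xs >= x_range[0]) & (xs <= x_range[1]) &
            (ys >= y_range[0]) & (ys <= y_range[1]))
    xs, ys = xs[keep], ys[keep]
    thetas = np.deg2rad(np.arange(theta_deg[0], theta_deg[1] + 1))
    rho_max = int(np.hypot(*binary.shape))
    acc = np.zeros((len(thetas), 2 * rho_max + 1), dtype=np.int32)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(len(thetas)), rhos + rho_max] += 1  # one vote per (theta, rho)
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    if acc[t, r] < min_votes:
        return None                                       # shaft not extracted
    return float(np.rad2deg(thetas[t])), int(r - rho_max)  # (theta in degrees, rho)
```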


In the embodiment, the tip coordinate is obtained by counting backward from the 50th frame after the photographing is started. A frame in which the moving distance of the tip between the frame and the following frame is equal to or less than a predetermined value is determined as a temporary frame of the address (STEP616). In the embodiment, an f-th frame in which the tip is in the second frame 28 (see FIG. 3) and the summation of the moving distances of the (f−1)th to (f+2)th tips is equal to or less than 40 is defined as the temporary frame.


SAD (color information) of a plurality of frames before and after the temporary frame is calculated (STEP617). SAD is calculated by the following numerical expression (F1).






SAD=(RSAD+GSAD+BSAD)/3   (F1)


In the numerical expression (F1), RSAD is calculated by the following numerical expression (F2); GSAD is calculated by the following numerical expression (F3); and BSAD is calculated by the following numerical expression (F4).






RSAD=(Rf1−Rf2)²   (F2)


GSAD=(Gf1−Gf2)²   (F3)


BSAD=(Bf1−Bf2)²   (F4)


In the numerical expression (F2), Rf1 represents an R value in the second frame 28 of the f-th frame; and Rf2 represents an R value in the second frame 28 of the (f+1)-th frame. In the numerical expression (F3), Gf1 represents a G value in the second frame 28 of the f-th frame; and Gf2 represents a G value in the second frame 28 of the (f+1)-th frame. In the numerical expression (F4), Bf1 represents a B value in the second frame 28 of the f-th frame; and Bf2 represents a B value in the second frame 28 of the (f+1)-th frame.


SAD of each of the frames is calculated, counting backward from a frame after the temporary frame by a predetermined number. In the embodiment, SAD is calculated from the frame 7 frames after the temporary frame back to the frame 10 frames before the temporary frame. A frame in which SAD is first less than 50 is determined as the true frame of the address (STEP618). The frame is the check frame. The outline of the check frame is determined (STEP7), and the quality of the swing is decided (STEP8). When a frame in which SAD is less than 50 does not exist, the frame in which SAD is the minimum is determined as the true frame of the address.
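
As a sketch, the address search may be implemented as follows, assuming each frame is a NumPy array and the bounds of the second frame 28 are given as a box. Averaging the squared differences per pixel is an assumption, since the text does not state how the region is reduced to a single SAD value; the function names are illustrative.

```python
def sad(frame_a, frame_b, region):
    """Formulas (F1)-(F4): squared channel differences inside the second
    frame 28, averaged over the region and over R, G, and B."""
    x0, y0, x1, y1 = region
    a = frame_a[y0:y1, x0:x1].astype(float)
    b = frame_b[y0:y1, x0:x1].astype(float)
    return ((a - b) ** 2).mean()       # equals (RSAD + GSAD + BSAD) / 3 per pixel

def find_address_frame(frames, temp, region):
    """Search backward from (temp + 7) to (temp - 10); the first frame whose
    SAD with the next frame drops below 50 is the true frame of the address."""
    candidates = range(temp + 7, temp - 11, -1)
    values = {f: sad(frames[f], frames[f + 1], region) for f in candidates}
    for f in candidates:
        if values[f] < 50:
            return f
    return min(values, key=values.get)  # fall back to the minimum SAD
```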


A flow chart of a method for determining the frame of the impact is shown in FIG. 8. Since the frame of the address has already been determined, a frame after the frame of the address by a predetermined number is determined as a reference frame (STEP621). The reference frame is a frame before the impact in which the golf club 22 does not appear in the second frame 28. In the embodiment, the frame after the frame of the address by 25 is defined as the reference frame.


Difference processing is conducted between the reference frame and each of the frames after the reference frame (STEP622). The difference processing is a known image processing technique. Difference images are shown in FIGS. 9 to 14. The details of the images are as follows.



FIG. 9: A difference image between a 44th frame and the reference frame



FIG. 10: A difference image between a 62nd frame and the reference frame



FIG. 11: A difference image between a 75th frame and the reference frame



FIG. 12: A difference image between a 76th frame and the reference frame



FIG. 13: A difference image between a 77th frame and the reference frame



FIG. 14: A difference image between a 78th frame and the reference frame


A difference value in the second frame 28 is calculated for each image after the difference processing (STEP623). The difference values are shown in the graph of FIG. 15. The graph shows that the difference value of the 77th frame is the largest. The 77th frame is determined as the frame of the impact (STEP624). The frame is an example of the check frame. The outline of the check frame is determined (STEP7), and the quality of the swing is decided (STEP8).


A flow chart of a method for determining the frame of the top is shown in FIG. 16. The frame of the impact has already been determined. Difference processing is conducted for the frames from the frame of the impact back to a frame before the impact by a predetermined number (STEP631). The difference processing is conducted between each frame and the frame after it by 1. A difference value is obtained by the difference processing. The difference values are shown in FIG. 17. In the embodiment, the frame in which the difference value is the minimum is selected between the frame before the impact by 15 and the frame of the impact (STEP632). In the example of FIG. 17, the 77th frame is the frame of the impact, and the 65th frame is the frame of the top. The 65th frame is the check frame. The outline of the check frame is determined (STEP7), and the quality of the swing is decided (STEP8).


A method for determining the frame of the finish may be the same as the method for determining the frame of the top. The frame of the impact has already been determined. The frames from a frame after the impact by a predetermined number to the last frame are subjected to difference processing. The difference processing is conducted between each frame and the frame after it by 1. A difference value is obtained by the difference processing. For example, the frame in which the difference value is the minimum between a frame after the impact by the predetermined number and the last frame is selected as the frame of the finish. The frame of the finish is the check frame. The outline of the check frame is determined (STEP7), and the quality of the swing is decided (STEP8).
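
The three determinations can be summarized in one sketch. The difference values are assumed to be precomputed arrays, and the offset of 10 frames used to start the finish search is only an assumption borrowed from the tracking description given later; the function and parameter names are illustrative.

```python
import numpy as np

def key_frames(diff_to_ref, neighbor_diff, start_offset=10):
    """diff_to_ref[f]: difference between frame f and the reference frame in
    the second frame 28; neighbor_diff[f]: difference between frame f and
    frame f + 1.  Returns the impact, top, and finish frame indices."""
    impact = int(np.argmax(diff_to_ref))                   # largest difference (FIG. 15)
    lo = max(0, impact - 15)
    top = lo + int(np.argmin(neighbor_diff[lo:impact]))    # quietest frame before impact
    start = impact + start_offset
    finish = start + int(np.argmin(neighbor_diff[start:]))  # quietest frame after impact
    return impact, top, finish
```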


A flow chart of a method for determining the predetermined position of the takeback is shown in FIG. 18. The frame of the address has already been determined. The difference processing of frames after the frame of the address is conducted (STEP641). The frame of the address is used as the reference frame, and the difference processing is conducted between the reference frame and each of the other frames. Difference images are shown in FIGS. 19 to 24. The details of the images are as follows.



FIG. 19: A difference image between a 30th frame and the reference frame



FIG. 20: A difference image between a 39th frame and the reference frame



FIG. 21: A difference image between a 41st frame and the reference frame



FIG. 22: A difference image between a 43rd frame and the reference frame



FIG. 23: A difference image between a 52nd frame and the reference frame



FIG. 24: A difference image between a 57th frame and the reference frame


In these difference images, the number of pixels in the longitudinal direction y is 640, and the number of pixels in the transverse direction x is 480. These difference images are subjected to the Hough transform (STEP642). A straight line corresponding to the shaft 34 can be calculated by the Hough transform. For each of the difference images, the existence or nonexistence of a straight line satisfying the following conditions is decided (STEP643).


θ: 5 degrees or greater and 85 degrees or less


ρ: no specification


x: 0 or greater and 240 or less


y: 0 or greater and 320 or less


number of votes: equal to or greater than 100


In a frame from which a straight line satisfying these conditions is extracted, the shaft 34 is located on the left side of the waist of the golf player 24. The frame (hereinafter referred to as a “matching frame”) after the frame of the address from which the straight line satisfying these conditions is first extracted is the check frame. A frame after the matching frame by a predetermined number may be determined as the check frame. It has been found empirically that in the frame after the matching frame by 2, the left arm of the right-handed golf player 24 is almost horizontal. The outline of the check frame is determined (STEP7), and the quality of the swing is decided (STEP8).
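
Because these conditions differ from the earlier shaft search only in their numerical ranges, the accumulator sketch shown earlier (find_shaft_line) can be reused; the hypothetical helper below merely checks whether a qualifying straight line exists in a difference image.

```python
def is_matching_frame(diff_binary):
    """STEP643: does a line with theta in [5, 85] degrees, x in [0, 240],
    y in [0, 320], and at least 100 votes exist in this difference image?"""
    line = find_shaft_line(diff_binary, theta_deg=(5, 85), min_votes=100,
                           x_range=(0, 240), y_range=(0, 320))
    return line is not None
```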


Hereinafter, the extracting method of the shaft position of the top will be described. FIG. 25 is a flow chart showing an example of the extracting method. FIG. 26 shows an image obtained in each stage of the extracting method. A portion other than the shaft can be removed from an object image in the extracting method. The shaft position can be extracted with a high degree of accuracy by the removal. The object image in the present application means an image from which the shaft position is extracted. The object image is subjected to the Hough transform processing to extract the shaft position. A preferred object image is an AND image to be described later or a corrected image of the AND image. An example of the corrected image is a difference image Dp-D to be described later. The corrected image is obtained by correcting the AND image with the mask image, for example.


In the frame of the top, the shaft may not be visible. For example, the shaft may not be visible when the shaft faces in the target direction in the top. In this case, the shaft position in the top is not extracted. However, in the embodiment, even when the shaft position in the top is not extracted, the quality of the swing can be judged.


An image T in which difference processing is started is determined in the extracting method (STEP1100). As described above, the frame of the top is already determined. The image T is a frame before the frame of the top by a predetermined number. Preferably, the predetermined number is 3 or greater and 10 or less, and more preferably 5. Preferably, the image T is an image immediately before the top. In the embodiment, the image T is a frame before the frame of the top by 5.


Even when the shaft is not visible in the frame of the top, the shaft is visible in frames before the top. The image T in which the difference processing is started is preferably a frame which is close to the top and in which the shaft tends to be visible.


Next, a difference A is performed (STEP1110). The difference A is difference processing of the image T and the address image. A difference image Dp-A obtained by the difference A is shown in FIG. 26.


Next, a difference B is performed (STEP1120). The difference B is difference processing of the image T and a frame after the image T by a predetermined number. In the embodiment, the predetermined number is set to 2. The predetermined number is preferably 1 or greater and 3 or less, and more preferably 2. A difference image Dp-B obtained by the difference B is shown in FIG. 26.


Next, a difference C is performed (STEP1130). The difference C is difference processing of the image T and a frame before the image T by a predetermined number. In the embodiment, the predetermined number is set to 2. The predetermined number is preferably 1 or greater and 3 or less, and more preferably 2. A difference image Dp-C obtained by the difference C is shown in FIG. 26.


Thus, in the difference B and the difference C, differences are taken between the image T and frames close to it on either side.


Next, AND processing is performed (STEP1140). The AND processing is performed between the difference image Dp-B and the difference image Dp-C. Only the pixels existing in both the difference image Dp-B and the difference image Dp-C are left by the AND processing. The AND image An-1 obtained by the AND processing is shown in FIG. 26. In the AND image An-1, portions other than the shaft in the image T are effectively removed.
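
A sketch of the differences B and C and the AND processing follows, assuming binary_diff is a function that returns a binarized difference image of two frames (for example, the color-distance difference described later); the names are illustrative.

```python
import numpy as np

def shaft_and_image(frames, t, binary_diff, offset=2):
    """Dp-B = diff(T, T + offset), Dp-C = diff(T, T - offset); the AND image
    keeps only pixels present in both (STEP1120 to STEP1140)."""
    dp_b = binary_diff(frames[t], frames[t + offset])
    dp_c = binary_diff(frames[t], frames[t - offset])
    return np.logical_and(dp_b, dp_c).astype(np.uint8)
```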


The AND processing is an example of correction processing of the difference image. The AND image An-1 is an example of the corrected image.


The AND image An-1 may be the object image. However, in the embodiment, the following steps are performed in order to further improve the accuracy of extraction of the shaft position.


Next, the mask image 1 is generated (STEP1150). In the step, the difference image Dp-A is subjected to processing. In the generation step of the mask image 1, the difference image Dp-A is subjected to contraction processing, labeling processing, and expansion processing. First, the difference image Dp-A is subjected to the contraction processing to remove dot noise or the like. Preferably, the difference image Dp-A is subjected to the contraction processing a plurality of times. In the embodiment, the difference image Dp-A is subjected to the contraction processing three times. Next, the difference image Dp-A is subjected to the labeling processing. In the labeling processing, a region having an area of a predetermined number or greater of pixels is left, and a region having an area of less than the predetermined number of pixels is removed. In the embodiment, the predetermined number of pixels in the labeling processing is 150. Next, the difference image Dp-A is subjected to the expansion processing. The size of the image is returned to the state before the contraction processing by the expansion processing. Preferably, the difference image Dp-A is subjected to the expansion processing a plurality of times. In the embodiment, the difference image Dp-A is subjected to the expansion processing four times. The obtained mask image 1 is represented by the symbol Mp-1 in FIG. 26.


The mask image 1 (image Mp-1) is used in order to remove an image of a portion other than the shaft. As shown in FIG. 26, in the mask image 1 (image Mp-1), the pixel of the shaft portion is removed, and the pixel of a portion of the golf player is mainly left.


Next, a mask image 2 is generated (STEP1160). In the step, the AND image An-1 is subjected to processing. In the generation step of the mask image 2, the AND image An-1 is subjected to the contraction processing, the labeling processing, and the expansion processing. First, the AND image An-1 is subjected to the contraction processing to remove dot noise or the like. Preferably, the AND image An-1 is subjected to the contraction processing a plurality of times. In the embodiment, the AND image An-1 is subjected to the contraction processing three times. Next, the AND image An-1 is subjected to the labeling processing. In the labeling processing, a region having an area of a predetermined number or greater of pixels is left, and a region having an area of less than the predetermined number of pixels is removed. In the embodiment, the predetermined number of pixels in the labeling processing is 15. Next, the AND image An-1 is subjected to the expansion processing. The size of the image is returned to a state before the contraction processing by the expansion processing. Preferably, the AND image An-1 is subjected to the expansion processing a plurality of times. In the embodiment, the AND image An-1 is subjected to the expansion processing three times. The mask image 2 is represented by symbol Mp-2 in FIG. 26.


The mask image 2 (image Mp-2) is used in order to remove an image of a portion (head) other than the shaft. As shown in FIG. 26, in the mask image 2 (image Mp-2), the pixel of the shaft portion is removed, and the pixel of the head in the image T is mainly left. The mask image 2 is useful to remove the portion (head) other than the shaft.


Next, a difference D is performed (STEP1170). The difference D is difference processing in which the mask image 1 and the mask image 2 are removed from the AND image An-1. A difference image Dp-D obtained by the difference D is shown in FIG. 26. The difference D is performed using the mask images 1 and 2, and thereby portions other than the shaft are effectively removed from the AND image An-1. The difference image Dp-D is an example of the masked difference image. The difference image Dp-D is an example of the corrected image.
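
Because the two mask images differ only in their iteration counts and labeling thresholds, a single helper suffices for STEP1150 to STEP1170 in a sketch; the 3×3 structuring element is an assumption, as the text does not specify the kernel shape.

```python
import cv2
import numpy as np

def make_mask(binary, erosions, dilations, min_area):
    """Contraction, labeling, and expansion (STEP1150/STEP1160)."""
    kernel = np.ones((3, 3), np.uint8)        # assumed structuring element
    img = cv2.erode(binary, kernel, iterations=erosions)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(img)
    keep = np.zeros_like(img)
    for i in range(1, n):                     # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 1
    return cv2.dilate(keep, kernel, iterations=dilations)

# Mask 1 from Dp-A (3 erosions, 4 dilations, area >= 150); mask 2 from the
# AND image (3 erosions, 3 dilations, area >= 15); then the difference D:
#   mask1 = make_mask(dp_a, 3, 4, 150)
#   mask2 = make_mask(and_img, 3, 3, 15)
#   dp_d = and_img & ~(mask1 | mask2)        # masked difference image Dp-D
```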


Next, the shaft searching region is determined (STEP1180). The accuracy of extraction of the shaft position can be improved by limiting the shaft searching region.


The shaft searching region is set to a region where the shaft is most likely to exist. When the shaft searching region is too narrow, the shaft is likely to fall outside the region. On the other hand, when the shaft searching region is too large, the extraction accuracy of the shaft may be reduced. A suitable shaft searching region is set in consideration of these points.


Preferably, a feature point is extracted using any of the images obtained in the steps to determine the shaft searching region. The shaft searching region is determined based on the feature point. An example of extraction of the feature point will be described later.


Next, the Hough transform processing is executed (STEP1190). The Hough transform processing is executed in the determined shaft searching region. The shaft is extracted by the Hough transform processing. However, the shaft is not extracted by the Hough transform processing when the shaft is not visible.


Next, it is judged whether the image T is the top or not (STEP1200). When the image T is the top, the extraction of the shaft is ended. When the image T is not the top, the frame after the image T by 1 is selected (STEP1210), and the processing returns to the difference step. The loop is repeated until the image T is the top. The time-sequential extraction of the shaft position is attempted by the loop. Because the original image T is the frame five frames before the top in the embodiment, the loop is repeated six times. Thus, in the embodiment, the extraction of the shaft position is attempted for the plurality of frames from the frame before the top to the frame of the top. Therefore, the extraction result of the shaft position is obtained time-sequentially for each of the plurality of frames.


The shaft position of the top is judged based on the extraction results in the plurality of frames. The extraction results are classified into the following results A to C.


[Result A]: The shaft position of the top is extracted.


[Result B]: Although the shaft position of the top is not extracted, the shaft position is extracted for at least one frame before the top.


[Result C]: The shaft position is not extracted in any of the frames in which the extraction of the shaft position is attempted.


In the case of the result A, the quality of the swing (the quality of the posture in the top) is judged based on the extracted shaft position in the top.


In the case of the result B, the extraction of the shaft position before the top (near the top) is achieved. Therefore, this is not a situation where the extraction of the shaft position near the top has failed. When the result B is obtained, the result that the shaft position of the top is not extracted has high reliability. That is, in the case of the result B, the shaft in the top can be judged to be substantially parallel to the target direction.


In the top, a state where the shaft is parallel to the target direction is an excellent shaft position. When the result B is obtained, the posture of the golf player 24 in the top is judged to be excellent.


In the case of the result C, as is the case with the result B, the shaft position in the top cannot be extracted. However, in the result C, the shaft position before the top (near the top) cannot be extracted either. The result C shows that the extraction of the shaft position near the top has failed. That is, the result C shows a situation where the extraction of the shaft has failed for some cause even though the shaft should be visible near the top. This is because it is hard to imagine a situation where the shaft is not visible before the top. Therefore, when the result C is obtained, the shaft position of the top is judged to be unclear. In this case, the quality of the posture of the golf player 24 in the top is not judged.
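
The classification into the results A to C reduces to a few lines; in this sketch, the list is assumed to hold one extraction result per frame of the tracking loop, ending with the frame of the top, and the function name is illustrative.

```python
def judge_top(shaft_by_frame):
    """shaft_by_frame: per-frame extraction results ending with the top;
    None marks a frame in which no shaft line was found."""
    if shaft_by_frame[-1] is not None:
        return "A"   # judge the posture from the extracted shaft position
    if any(r is not None for r in shaft_by_frame[:-1]):
        return "B"   # shaft parallel to the target direction: excellent posture
    return "C"       # extraction failed near the top: position unclear, no judgment
```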


The judgement associated with these extraction results includes the following step (Sa) or (Sb):


(Sa) the step of judging the quality of the posture of the golf player according to whether the shaft position of the check frame is extracted or not in the step of attempting to extract the shaft position; and


(Sb) the step of judging the quality of the posture of the golf player according to the shaft position when the shaft position of the check frame is extracted in the step of attempting to extract the shaft position.


Preferably, the step (Sa) includes the following step (Sa1).


(Sa1) When the shaft position of the check frame cannot be extracted, the advisability of the extraction of the shaft position in the check frame is judged according to whether the shaft position before the check frame is extracted or not.


In the step (Sa1), when the shaft position before the check frame is extracted, the extraction of the shaft position in the check frame is judged to be suitable. In the step (Sa1), when the shaft position before the check frame is not extracted, the extraction of the shaft position in the check frame is judged to be unsuitable.


Preferably, the step (Sa) includes the following step (Sa2).


(Sa2) the step of judging that the shaft position (the posture of the golf player) is excellent when the shaft position of the check frame is not extracted in the step of attempting to extract the shaft position and the shaft position is extracted in the frame before the check frame.


In the steps (Sa1) and (Sa2), the result for the frame before the check frame is preferably the extraction result of a frame other than the check frame among the extraction results of the plurality of frames obtained by the time-sequential extraction. In the embodiment, the check frame is the frame of the top.


The result B and the result C cannot be distinguished by merely extracting the shaft position of the check frame (the frame of the top). Therefore, when only the extraction of the shaft position of the top is executed, the shaft position can be wrongly judged to be excellent even if the extraction of the shaft position near the top has failed. In the embodiment, the reliability of the extraction result of the shaft in the top is improved by tracking the shaft position from the frame before the top.


Hereinafter, the details of the difference steps (STEP 1110, 1120, and 1130) are exemplarily shown. The difference processing in the difference steps is difference processing from an original image (color image). Therefore, the binary processing is performed after the difference processing in order to obtain the binarized difference image in the difference steps. On the other hand, in the present application, the difference processing also includes difference processing of binarized images. In that case, the binary processing after the difference processing is unnecessary.


In an example of the difference steps, hue H, chroma S, and brightness V are calculated for each pixel of the two frames which are the subjects of the difference. The formulae for the calculation are shown in the following items (1) to (4).









H = 0   (R=G=B)
H = 90/3.6   (d=0, G>B)
H = 270/3.6   (d=0, G<B)
H = degree(arctan(√3·(G−B)/(2R−G−B)) + π)/3.6   (2R−G−B<0)
H = degree((arctan(√3·(G−B)/(2R−G−B)) + 2π) mod 2π)/3.6   (2R−G−B>0)   (1)


S = √(((B−R)² + (R−G)² + (G−B)²)/2) × 100   (2)


V = ((R+G+B)/3) × 100   (3)


L = ((0.298912·R + 0.586611·G + 0.114478·B)/3) × 100   (0≤R, G, B≤1)   (4)







As shown in the formula (1), the hue H is determined by five classifications. R, G, and B are the values of the color information of the RGB system. d=2R−G−B is set. “d” stands for “denominator”. “L” is luminance.


Conversion from the RGB system to an HSV system is known. Known conversion formulae other than the above formulae may be used.


Next, a color distance between pixels is calculated based on the calculated hue H, chroma S, and brightness V. In one differentiated frame, the hue is defined as H1; the chroma is defined as S1; and the brightness is defined as V1. In the other differentiated frame, the hue is defined as H2; the chroma is defined as S2; and the brightness is defined as V2. A color vector in which the hue is H1, the chroma is S1, and the brightness is V1 is defined as C1. A color vector in which the hue is H2, the chroma is S2, and the brightness is V2 is defined as C2. A color distance D (C1, C2) is calculated by the following formulae (5) to (16). The color distance D (C1, C2) is 0 or greater and 10.0 or less. In the formula (7), a color distance ΔH between C1 and C2 in a Hue-Saturation space is obtained. A logarithm is taken in the formulae (15) and (16) in order to correctly calculate the color distance even when S1 and S2 are low. a, b, and c in the formula (5) are constants. In the embodiment, a is set to 5.1; b is set to 2.25; and c is set to 2.65.










D(C1, C2) = a·ΔH′ + b·ΔS′ + c·ΔV′   (5)


ΔH′ = ΔH/4.0,  ΔS′ = ΔS/2.0,  ΔV′ = ΔV/2.0   (6)


ΔH = √((X1−X2)² + (Y1−Y2)²)   (7)


ΔS = |S1/100 − S2/100|   (8)


ΔV = |V1/100 − V2/100|   (9)


X1 = Savg·cos(H1×3.6)   (10)


Y1 = Savg·sin(H1×3.6)   (11)


X2 = Savg·cos(H2×3.6)   (12)


Y2 = Savg·sin(H2×3.6)   (13)


Savg = (S1′+S2′)/2   (14)


S1′ = log10(S1/100×99+1.0)   (15)


S2′ = log10(S2/100×99+1.0)   (16)







The calculation of the color distance D (C1, C2) is an example of the difference processing. The binary processing is performed based on the color distance D (C1, C2). In the binary processing, a pixel in which the color distance D (C1, C2) is equal to or greater than a predetermined value is set to 1, and a pixel in which the color distance D (C1, C2) is less than the predetermined value is set to 0. In the embodiment, a pixel in which the color distance D (C1, C2) is equal to or greater than 0.4 is set to 1, and a pixel in which the color distance D (C1, C2) is less than 0.4 is set to 0. The binarized difference image is obtained by the binary processing.
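
The formulas (1) to (16) translate directly into code. The sketch below processes one pixel pair with R, G, and B normalized to [0, 1]; it is a plain transcription of the formulas, with the constants a, b, and c of the embodiment as defaults, and the function names are illustrative.

```python
import math

def hsv100(r, g, b):
    """Formulas (1)-(3): hue, chroma, and brightness on a 0-100 scale."""
    d = 2 * r - g - b
    if r == g == b:
        h = 0.0
    elif d == 0:
        h = (90.0 if g > b else 270.0) / 3.6
    else:
        t = math.atan(math.sqrt(3) * (g - b) / d)
        rad = t + math.pi if d < 0 else (t + 2 * math.pi) % (2 * math.pi)
        h = math.degrees(rad) / 3.6
    s = math.sqrt(((b - r) ** 2 + (r - g) ** 2 + (g - b) ** 2) / 2) * 100
    v = (r + g + b) / 3 * 100
    return h, s, v

def color_distance(c1, c2, a=5.1, b=2.25, c=2.65):
    """Formulas (5)-(16): color distance D(C1, C2) between two (H, S, V) colors."""
    (h1, s1, v1), (h2, s2, v2) = c1, c2
    s1p = math.log10(s1 / 100 * 99 + 1.0)          # (15)
    s2p = math.log10(s2 / 100 * 99 + 1.0)          # (16)
    savg = (s1p + s2p) / 2                         # (14)
    x1 = savg * math.cos(math.radians(h1 * 3.6))   # (10)
    y1 = savg * math.sin(math.radians(h1 * 3.6))   # (11)
    x2 = savg * math.cos(math.radians(h2 * 3.6))   # (12)
    y2 = savg * math.sin(math.radians(h2 * 3.6))   # (13)
    dh = math.hypot(x1 - x2, y1 - y2) / 4.0        # (7), (6)
    ds = abs(s1 / 100 - s2 / 100) / 2.0            # (8), (6)
    dv = abs(v1 / 100 - v2 / 100) / 2.0            # (9), (6)
    return a * dh + b * ds + c * dv                # (5)

# A pixel becomes 1 in the binarized difference image when
# color_distance(c1, c2) >= 0.4, and 0 otherwise.
```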


Next, an example of the determination step (STEP1180) of the shaft searching region will be described. The determination step includes the step of extracting the feature point. The shaft searching region is determined based on the feature point. The feature point suitable for determining a suitable shaft searching region is selected.


In the embodiment, a central point HC1 (not shown) of a head image is extracted as a first feature point. The central point HC1 is the central point of the head image photographed in the mask image 2 (image Mp-2; see FIG. 26). In the head image, a maximum value X1 of x, a minimum value X2 of x, a maximum value Y1 of y, and a minimum value Y2 of y are determined. When x of the central point HC1 is defined as Xc, and y of the central point HC1 is defined as Yc, Xc and Yc are calculated by the following formulae:






Xc=(X1+X2)/2






Yc=(Y1+Y2)/2
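
In code, the center follows directly from the nonzero pixels of the mask image 2; the helper name is illustrative, and None signals the case, described later, in which the head is not visible.

```python
import numpy as np

def head_center(mask2):
    """Central point HC1 (Xc, Yc) of the head pixels in mask image 2."""
    ys, xs = np.nonzero(mask2)
    if xs.size == 0:
        return None                       # head not visible (cf. FIG. 32)
    xc = (int(xs.max()) + int(xs.min())) // 2
    yc = (int(ys.max()) + int(ys.min())) // 2
    return xc, yc
```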


In the embodiment, a hand position in the address is extracted as a second feature point. The step of extracting the hand position in the address includes the step St1 of extracting the shaft in the address and the step St2 of extracting the hand position in the address based on the result of the step St1.


An edge image EAp of the address is used in the step St1 (see FIG. 27). The depiction of the edge image itself is omitted in FIG. 27. Preferably, a predetermined shaft searching range is determined in the step St1. The shaft searching range is determined in consideration of the position of the shaft in the address. In the embodiment, in the shaft searching range, x is 240 to 480, and y is 320 to 640 (see FIG. 27). In the step St1, the shaft searching range is subjected to Hough transform processing. The shaft position is extracted by the Hough transform processing.



FIG. 28 describes the step St2. The mask image 1 (image Mp-1) is used in the step St2. In the step St2, a straight line Ls along the shaft position extracted in the step St1 is used. A hand position AG1 in the address is determined based on the intersection of the straight line Ls and the image Mp-1 (see FIG. 28). x of the hand position AG1 is defined as Xa. y of the hand position AG1 is defined as Ya.


In the embodiment, the position of the head in the top (or near the top) is extracted as a third feature point. The mask image 1 (image Mp-1) is used to extract the position of the head. A rectangle shown in FIG. 29 is a searching range to extract the position of the head. In the embodiment, x of the searching range is set to [Xa-40] to [480×0.8]. y of the searching range is set to 20 to Ya. In the image existing in the searching range, a point in which y is the minimum (the uppermost side in FIG. 29) is defined as a position TH1 of the head (see FIG. 29). x of the position TH1 of the head is defined as Xh. y of the position TH1 of the head is defined as Yh.


In the embodiment, the position of the hand in the top (or near the top) is extracted as a fourth feature point. The mask image 1 (image Mp-1) is used to extract the position of the hand. A rectangle shown in FIG. 30 is a searching range to extract the position of the hand. In the embodiment, x of the searching range is set to 40 to [Xh-40]. y of the searching range is set to 20 to Ya. In the image existing in the searching range, a point in which y is the minimum (the uppermost side in FIG. 30) is defined as a position TG1 of the hand (see FIG. 30). x of the position TG1 of the hand is defined as Xt. y of the position TG1 of the hand is defined as Yt.


The position TG1 may be appropriately corrected. In the embodiment, when y of the hand position to be finally determined is defined as Yf, Yf is defined as [Yt+10]. When x of the hand position to be finally determined is defined as Xf, Xf is defined as Xt (no correction).


The following feature points are determined by the steps:


the central point HC1 (Xc, Yc) of the head image; and


the hand position (Xf, Yf) in the top (or near the top).


The shaft searching region which is the last object of the step (STEP1180) is determined based on these feature points.


An example of the determined shaft searching region is shown by a quadrangle in FIG. 31. x of the region is [Xf−20] to [Xc+20]. y of the region is [Yc−20] to [Yf+20]. Thus, the shaft searching region is determined. The determined shaft searching region is subjected to the Hough transform processing (STEP1190). In the Hough transform processing, a straight line in which the number of votes is equal to or greater than a predetermined number is extracted as the shaft. The threshold for the number of votes is preferably 3 or greater and 10 or less, and more preferably 6. It is preferable that no restriction condition is set for θ.
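
Assembled from these feature points, the search region reduces to a small box. A sketch follows, using the 20-pixel margin of the embodiment; the function name is illustrative.

```python
def top_search_region(hand_xy, head_xy, margin=20):
    """Shaft searching region of FIG. 31 around the hand (Xf, Yf) and the
    head center (Xc, Yc); returns (x0, y0, x1, y1)."""
    (xf, yf), (xc, yc) = hand_xy, head_xy
    return xf - margin, yc - margin, xc + margin, yf + margin
```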



FIG. 32 shows another AND image An-1. As in the embodiment of FIG. 32, the head may not be visible in the AND image. In this case, the image of the head does not exist in the mask image 2 (image Mp-2) either. Therefore, the coordinate (Xc, Yc) of the central point HC1 cannot be determined. In this case, the shaft searching region can be determined by using the other feature points. A region shown by a quadrangle in FIG. 32 is an example of the determined shaft searching region. An x-coordinate Xb of a waist position and a y-coordinate Yb of the waist position are used to determine the region.


Furthermore, the maximum x-coordinate Xm in the AND image An-1 is used to determine the region. x of the right end of the image in FIG. 32 is Xm.


An example of the determining method of Xb and Yb is as follows. In the AND image An-1, a point having an extremal value on the line of the back side of the body (the back line) can be defined as the waist position B1 (Xb, Yb). An example of the determined waist position B1 is shown in FIG. 32.


The shaft searching region is determined using the waist position B1 (Xb, Yb). In the embodiment shown in FIG. 32, x of the shaft searching region is Xb to Xm, and y of the shaft searching region is Yf to Yz, where Yz is calculated by the following formula.






Yz=((Yb−Yf)/3)+Yf
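In code, the fallback region of FIG. 32 could be assembled as below; this is a sketch in which integer division stands in for the formula, and B1 (Xb, Yb), Xm, and Yf are assumed to be already determined.

    # Fallback shaft searching region when the head center HC1 is unavailable.
    Yz = (Yb - Yf) // 3 + Yf               # Yz = ((Yb - Yf) / 3) + Yf
    region = object_image[Yf:Yz, Xb:Xm]    # y: Yf to Yz, x: Xb to Xm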


Thus, the shaft searching region in the top (or near the top) can be determined based on the plurality of feature points. The accuracy of the shaft extraction can be improved by suitably setting the shaft searching region.


The shaft position of the finish can be extracted by a method similar to the method for extracting the shaft position of the top. As in the top, the shaft may not be visible in the finish. Therefore, the method can also be effectively applied to extract the shaft position of the finish.


To extract the shaft position of the finish, the shaft is preferably tracked from a frame located a predetermined number of frames after the impact. The frame in which the tracking is started is a frame between the impact and the finish. The predetermined number is preferably 5 or greater and 15 or less, and more preferably 10. Even when the shaft position in the finish is not extracted by the tracking, suitable judgment can be performed.


Hereinafter, the outline of the extracting method of the shaft position of the finish will be described. Because the extracting method resembles that for the top (see FIG. 25), only the points of difference between the case of the finish and the case of the top will be described with reference to FIG. 25.


In the extracting method, the image T in which the difference processing is started is determined (see the STEP1100). As described above, the frame of the impact has already been determined. The image T is a frame located a predetermined number of frames after the frame of the impact. In the embodiment, the image T is the frame located 10 frames after the frame of the impact.


Even when the shaft is not visible in the frame of the finish, the shaft is visible in the frames before the finish. The image T in which the difference processing is started is therefore preferably set to a frame which is close to the finish and in which the shaft tends to be visible.


In the embodiment (the extracting method of the shaft position of the top) of FIG. 25, the image T in which the difference processing is started is a frame between the address and the top. On the other hand, in the extracting method of the shaft position of the finish, the image T in which the difference processing is started is a frame between the impact and the finish.


Next, the difference A is performed (see the STEP1110). The difference A is difference processing of the image T and the address image. Difference image DA is obtained by the difference A.


Next, the difference B is performed (see the STEP1120). The difference B is difference processing of the image T and a frame after the image T by a predetermined number. Difference image DB is obtained by the difference B. The predetermined number is preferably 1 or greater and 3 or less, and more preferably 2.


Next, the difference C is performed (see the STEP1130). The difference C is difference processing of the image T and a frame before the image T by a predetermined number. Difference image DC is obtained by the difference C. The predetermined number is preferably 1 or greater and 3 or less, and more preferably 2.


Thus, the difference B and the difference C take differences between the image T and a nearby frame on each side: a frame shortly after the image T, and a frame shortly before it.
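A minimal sketch of the three differences, assuming OpenCV, grayscale frames stored in a list, and the preferable interval of 2 frames; the binarization threshold of 30 is an assumption, not a value given in the embodiment.

    import cv2

    def binary_difference(f1, f2, thresh=30):
        # Difference processing followed by binary processing.
        d = cv2.absdiff(f1, f2)
        _, b = cv2.threshold(d, thresh, 255, cv2.THRESH_BINARY)
        return b

    DA = binary_difference(frames[t], frames[address_index])  # difference A
    DB = binary_difference(frames[t], frames[t + 2])          # difference B
    DC = binary_difference(frames[t], frames[t - 2])          # difference C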


Next, the AND processing is performed (see the STEP1140). The AND processing is performed between the difference image DB and the difference image DC. Only pixels existing in both the difference images are left. An AND image A1 is thereby generated. In the AND image A1, portions other than the shaft in the image T are effectively removed.
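The AND processing itself is a single pixel-wise operation; a sketch, continuing the names above:

    # Keep only the pixels present in both difference images DB and DC.
    A1 = cv2.bitwise_and(DB, DC)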


The AND image A1 may be the object image. However, the following steps are preferably performed from the viewpoint of further improving the accuracy of the extraction of the shaft position.


Next, the mask image 1 is generated (see the STEP1150). In this step, the difference image DA is processed. In the generation step of the mask image 1, the difference image DA is subjected to contraction processing, labeling processing, and expansion processing. First, the difference image DA is subjected to the contraction processing to remove dot noise or the like. Preferably, the contraction processing is applied a plurality of times. In the embodiment, it is applied three times. Next, the difference image DA is subjected to the labeling processing. In the labeling processing, a region having an area of a predetermined number of pixels or greater is left, and a region having an area of less than the predetermined number of pixels is removed. In the embodiment, the predetermined number of pixels in the labeling processing is 150. Next, the difference image DA is subjected to the expansion processing. The expansion processing returns the remaining regions to their size before the contraction processing. Preferably, the expansion processing is applied a plurality of times. In the embodiment, it is applied three times.
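Contraction, labeling, and expansion correspond to erosion, connected-component filtering, and dilation. A sketch follows; the 3x3 kernel is an assumption, while the three iterations and the area thresholds (150 here, 15 for the mask image 2 of the next step) are the values of the embodiment.

    import cv2
    import numpy as np

    def make_mask(src, min_area, iterations=3):
        kernel = np.ones((3, 3), np.uint8)     # kernel size: an assumption
        img = cv2.erode(src, kernel, iterations=iterations)           # contraction
        n, labels, stats, _ = cv2.connectedComponentsWithStats(img)   # labeling
        out = np.zeros_like(img)
        for i in range(1, n):                  # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                out[labels == i] = 255         # keep sufficiently large regions
        return cv2.dilate(out, kernel, iterations=iterations)         # expansion

    mask1 = make_mask(DA, 150)                 # mask image 1
    mask2 = make_mask(A1, 15)                  # mask image 2 (see the STEP1160)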


The mask image 1 is used in order to remove an image of a portion other than the shaft. Although not shown in the drawings, in the mask image 1, the pixel of the shaft portion is removed, and the pixel of a portion of the golf player is mainly left.


Next, the mask image 2 is generated (see the STEP1160). In this step, the AND image A1 is processed. In the generation step of the mask image 2, the AND image A1 is subjected to contraction processing, labeling processing, and expansion processing. First, the AND image A1 is subjected to the contraction processing to remove dot noise or the like. Preferably, the contraction processing is applied a plurality of times. In the embodiment, it is applied three times. Next, the AND image A1 is subjected to the labeling processing. In the labeling processing, a region having an area of a predetermined number of pixels or greater is left, and a region having an area of less than the predetermined number of pixels is removed. In the embodiment, the predetermined number of pixels in the labeling processing is 15. Next, the AND image A1 is subjected to the expansion processing. The expansion processing returns the remaining regions to their size before the contraction processing. Preferably, the expansion processing is applied a plurality of times. In the embodiment, it is applied three times.


The mask image 2 is used in order to remove an image of a portion (the head) other than the shaft. Although not shown in the drawings, in the mask image 2, the pixel of the shaft portion is removed, and the pixel of the head in the image T is mainly left.


Next, the difference D is performed (see the STEP1170). The difference D is difference processing in which the mask image 1 and the mask image 2 are removed from the AND image A1. The portion other than the shaft is effectively removed from the AND image A1 by performing the difference D using the mask images 1 and 2.
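Removing the two mask images from the AND image A1 can be sketched as a pixel-wise subtraction:

    # Difference D: suppress every pixel covered by mask image 1 or mask image 2.
    D = cv2.bitwise_and(A1, cv2.bitwise_not(cv2.bitwise_or(mask1, mask2)))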


Next, the shaft searching region is determined (see the STEP1180). The accuracy of the extraction of the shaft position can be improved by limiting the shaft searching region.


The shaft searching region is set to a region where the shaft is likely to exist. When the shaft searching region is too narrow, the shaft is more likely to fall outside the region. On the other hand, when the shaft searching region is too large, the extraction accuracy of the shaft may be reduced. A suitable shaft searching region is set in consideration of these points.


Preferably, a feature point is extracted using any of the images obtained in the above steps, and the shaft searching region is determined based on the feature point.


Next, the Hough transform processing is executed (see the STEP1190). The Hough transform processing is executed in the determined shaft searching region. The shaft is extracted by the Hough transform processing. However, when the shaft is not visible, the shaft is not extracted by the Hough transform processing.


In the Hough transform processing, a straight line in which the number of votes is equal to or greater than a predetermined number is extracted as the shaft. The number of votes is preferably 5 or greater and 15 or less, and more preferably 10. Preferably, the restriction condition of θ is not set.


In the Hough transform processing, the number of votes may be less than the predetermined number (for example, 10). In this case, the AND image A1 may be subjected to the Hough transform processing in place of the difference image D. That is, in this case, the AND image A1 may be the object image. When a straight line in which the number of votes is equal to or greater than a predetermined number (for example, 10) is extracted by the Hough transform processing, the straight line is extracted as the shaft. When the AND image A1 is the object image, the restriction condition of θ is preferably added. A preferable restriction condition of θ is −90 degrees or greater and 0 degrees or less.
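The fallback can be sketched as below. Note that cv2.HoughLines measures θ in the range 0 to π, so the signed range of the embodiment (−90 degrees to 0 degrees) has to be remapped; the mapping shown (π/2 to π) is an assumption about the coordinate convention, and D_region and A1_region are hypothetical crops of the respective object images.

    # Primary attempt on the difference image D (vote threshold 10).
    lines = cv2.HoughLines(D_region, 1, np.pi / 180, 10)
    if lines is None:
        # Fall back to the AND image A1 with a restriction on theta.
        lines = cv2.HoughLines(A1_region, 1, np.pi / 180, 10,
                               min_theta=np.pi / 2, max_theta=np.pi)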


Next, it is judged whether the image T is the finish or not (see the STEP1200). When the image T is the finish, the extraction of the shaft is ended. When the image T is not the finish, the frame immediately after the image T is selected as a new image T (see the STEP1210), and the processing returns to the difference step. The loop is repeated until the image T is the finish. In the embodiment, therefore, the extraction of the shaft position is attempted for a plurality of frames, from frames before the finish to the frame of the finish. An extraction result of the shaft position is thereby obtained for each of the plurality of frames.
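The loop reduces to attempting the extraction for every frame from the start frame to the finish and recording the per-frame result; a sketch, with try_extract_shaft standing in hypothetically for the STEP1110 through STEP1190 pipeline:

    shaft_by_frame = {}
    t = impact_index + 10                  # start frame (embodiment: impact + 10)
    while t <= finish_index:               # the STEP1200/STEP1210 loop
        shaft_by_frame[t] = try_extract_shaft(t)   # a line, or None if not found
        t += 1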


The shaft position of the finish is judged based on the extraction result in the plurality of frames. The extraction result is classified into the following results A to C.

  • [Result A]: The shaft position of the finish is extracted.
  • [Result B]: Although the shaft position of the finish is not extracted, the shaft position is extracted in at least one frame before the finish.
  • [Result C]: The shaft position is not extracted in any of the frames for which the extraction is attempted.


In the case of the result A, the quality of the swing (the quality of the posture in the finish) is judged based on the extracted shaft position of the finish.


In the case of the result B, the extraction of the shaft position before the finish (near the finish) succeeds. Therefore, the situation is not one where the extraction of the shaft position near the finish has failed. When the result B is obtained, the finding that the shaft position of the finish is not extracted has high reliability. That is, in the case of the result B, the shaft can be judged to be substantially parallel to the target direction in the finish. Alternatively, the case where the shaft is hidden by the body in the finish is also assumed.


In the case of the result C, as in the result B, the shaft position in the finish cannot be extracted. In the result C, however, the shaft position before the finish (near the finish) cannot be extracted either. The result C shows that the extraction of the shaft position near the finish has failed. That is, the result C shows a situation where the extraction of the shaft has failed for some cause even though the shaft should be visible near the finish. This is because a situation where the shaft is not visible before the finish (from the impact to the finish) is hard to conceive. Therefore, when the result C is obtained, the shaft position of the finish is judged to be unclear. In this case, the quality of the posture of the golf player 24 in the finish is not judged.
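Classifying the per-frame results into the results A to C then becomes a simple check; the function below is an illustrative sketch, not part of the embodiment.

    def classify_result(shaft_by_frame, finish_index):
        if shaft_by_frame.get(finish_index) is not None:
            return "A"                     # shaft extracted in the finish itself
        if any(line is not None for line in shaft_by_frame.values()):
            return "B"                     # extracted only before the finish
        return "C"                         # no frame yielded the shaft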


The result B and the result C cannot be distinguished by merely extracting the shaft position of the finish. Therefore, when only the extraction of the shaft position of the finish is executed, the posture could erroneously be judged to be excellent even though the extraction of the shaft position near the finish has failed. In the embodiment, the reliability of the extraction result of the shaft in the finish is improved by tracking the shaft position from the frames before the finish.


The check frame is not limited to the top and the finish. Determining other check frames enables swing diagnosis at various positions. For example, the quality of the swing may be judged by the angle between the straight line corresponding to the shaft 34 in the address and the straight line corresponding to the shaft 34 in the downswing.


Although the calculating part 16 of the server 6 conducts each of the processings in the embodiment, the calculating part 16 of the mobile telephone 4 may conduct each of the processings. In this case, the connection between the mobile telephone 4 and the server 6 is unnecessary.


The method according to the present invention can diagnose a swing performed in a golf course, a practice range, a golf shop, a garden of a general household, or the like.


The description hereinabove is merely an illustrative example, and various modifications can be made within a scope not departing from the principles of the present invention.

Claims
  • 1. A diagnosing method of a golf swing comprising the steps of: a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club to obtain a plurality of frames for determining a shaft position; a calculating part subjecting a check frame for judging the shaft position and the other frame to difference processing and binary processing using the plurality of frames, to obtain a binarized difference image; and the calculating part subjecting the difference image or a corrected image of the difference image to Hough transform processing to attempt to extract the shaft position.
  • 2. The method according to claim 1, further comprising the following step (Sa) and/or step (Sb): (Sa) the step of judging quality of a posture of the golf player according to whether the shaft position of the check frame is extracted or not in the step of attempting to extract the shaft position; and (Sb) the step of judging the quality of the posture of the golf player according to the shaft position when the shaft position of the check frame is extracted in the step of attempting to extract the shaft position.
  • 3. The method according to claim 1, wherein the check frame is a frame of a top.
  • 4. The method according to claim 1, wherein the check frame is a frame of a finish.
  • 5. A diagnosing method of a golf swing comprising the steps of: a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club to obtain a plurality of frames for determining a shaft position; a calculating part extracting a plurality of predetermined frames from the plurality of frames; the calculating part performing difference processing and binary processing using the plurality of predetermined frames to obtain a plurality of binarized difference images; the calculating part subjecting the plurality of difference images to AND processing to obtain an AND image for extracting the shaft position; the calculating part determining a shaft searching region using the AND image; and the calculating part subjecting the plurality of AND images or corrected images of the AND images to Hough transform processing to attempt to time-sequentially extract the shaft position.
  • 6. The method according to claim 5, further comprising the steps of: the calculating part subjecting the difference image and/or the AND image to contraction processing and expansion processing to obtain a mask image; and the calculating part subjecting the AND image and the mask image to difference processing to obtain a masked difference image, wherein the corrected image is the masked difference image.
  • 7. The method according to claim 5, wherein an extraction result of a shaft position of a check frame and an extraction result of a shaft position before the check frame are obtained as a result of the time-sequential extraction; and advisability of extraction of the shaft position in the check frame is judged according to whether the shaft position before the check frame is extracted or not when the shaft position of the check frame cannot be extracted.
  • 8. The method according to claim 7, wherein the check frame is a frame of a top or a frame of a finish.
  • 9. A diagnosing system of a golf swing comprising: (A) a camera photographing a golf player swinging a golf club to hit a golf ball and the golf club; (B) a memory storing photographed image data; and (C) a calculating part, wherein the calculating part comprises: (C1) a function for extracting a plurality of predetermined frames from the image data; (C2) a function for performing difference processing and binary processing using the plurality of predetermined frames to obtain a plurality of binarized difference images; (C3) a function for subjecting the plurality of difference images to AND processing to obtain an AND image for extracting a shaft position; (C4) a function for determining a shaft searching region using the AND image; and (C5) a function for subjecting the plurality of AND images or corrected images of the AND images to Hough transform processing to attempt to time-sequentially extract the shaft position.
Priority Claims (1)
Number       Date      Country   Kind
2011-266491  Dec 2011  JP        national