1. Field of the Invention
The present invention relates to a technology for analyzing a motion of a user. Priority is claimed on Japanese Patent Application No. 2012-279501, filed on Dec. 21, 2012, the content of which is incorporated herein by reference.
2. Description of Related Art
In the related art, various technologies have been proposed for analyzing a motion of a user. For example, Japanese Unexamined Patent Application, First Publication No. H06-39070 discloses a technology which displays a moving image of a swing motion of the user concurrently with a pre-recorded moving image of a reference swing motion (for example, a swing motion of a professional golfer) on an identical screen.
According to the technology disclosed in Japanese Unexamined Patent Application, First Publication No. H06-39070, the user analyzes their own swing motion by visually comparing it with the reference swing motion. In practice, however, it is difficult to accurately and precisely compare the motion of the user with the reference motion and understand the difference between the two motions merely by visually checking the moving images on the screen. In view of the above-described circumstances, the present invention aims to enable the user to easily understand the difference between the motion of the user and the reference motion.
In order to solve the above-described problem, a motion analysis device of the present invention includes observation data acquisition means for acquiring observation data which indicates a trajectory of a target observation point moving in conjunction with a motion of a user; comparison means for comparing reference data which indicates a predetermined trajectory of the target observation point with the observation data acquired by the observation data acquisition means; and audio control means for generating an audio signal according to a comparison result obtained by the comparison means.
According to the present invention, the audio signal can be generated according to the comparison result between the observation data and the reference data. Therefore, the user can easily understand a difference between the trajectory of the target observation point and the predetermined trajectory indicated by the reference data.
The motion analysis system 100 of the first embodiment includes a motion analysis device 10 and an acceleration sensor 20, and is used to analyze a swing motion in which a user U swings a golf club C. The acceleration sensor 20 sequentially detects the acceleration of a target observation point P, which moves in conjunction with the swing motion of the user U, and sequentially supplies a sensor output Da indicating the detection result to the motion analysis device 10. The motion analysis device 10 includes an arithmetic processing unit 12 and a storage device 14, and an audio signal S generated by the motion analysis device 10 is reproduced by a sound emitting device 16.
The arithmetic processing unit 12 realizes a plurality of functions for analyzing the motion of the user U (an observation data acquisition unit 32, a comparison unit 34 and an audio control unit 36) by executing a program stored in the storage device 14. It is also possible to distribute the functions of the arithmetic processing unit 12 to a plurality of devices.
The observation data acquisition unit 32 sequentially acquires observation data Db indicating a trajectory (hereinafter, referred to as an "observation trajectory") Oa of the target observation point P corresponding to the swing motion of the user U. More specifically, the observation data acquisition unit 32 sequentially generates the observation data Db in a time-series manner by using the sensor output Da supplied from the acceleration sensor 20. Each observation data Db includes an observation value Bx, an observation value By and an observation value Bz corresponding to the respective axis directions.
The storage device 14 stores the program executed by the arithmetic processing unit 12 and various data used by the arithmetic processing unit 12. More specifically, the storage device 14 stores a reference data series Sref, which is a time series of a plurality of reference data Dref indicating a trajectory (hereinafter, referred to as a "reference trajectory") Oref of the target observation point P, and audio data W indicating a waveform of a predetermined sound (for example, wind noise generated by the club C during the swing motion).
The reference trajectory Oref is a standard for the observation trajectory Oa specified by each observation data Db. For example, a trajectory of the target observation point P obtained when a standard performer skilled in the swing motion, such as a professional golfer, performs a standard swing motion is preferably employed as the reference trajectory Oref. More specifically, when the standard performer performs the swing motion, the time series of the plurality of observation data Db generated by the observation data acquisition unit 32 is stored in advance in the storage device 14 as the reference data series Sref (each reference data Dref). Therefore, the reference value Rx of each reference data Dref corresponds to a change amount of the acceleration Ax during the standard swing motion, the reference value Ry corresponds to a change amount of the acceleration Ay, and the reference value Rz corresponds to a change amount of the acceleration Az.
The comparison unit 34 compares the observation data Db sequentially acquired by the observation data acquisition unit 32 with the reference data Dref of the reference data series Sref stored in the storage device 14. An example of the operation of the comparison unit 34 (Steps S1 to S4) is described below.
The comparison unit 34 of the present embodiment compares the observation data Db with the reference data Dref in a predetermined analysis section within a section from the start to the completion of the swing motion performed by the user U. The analysis section is a section from a point of time when the user U starts the downswing motion (the action of swinging the club C down) after the takeaway action in the backswing (hereinafter, referred to as an "action start point") until a predetermined time duration T elapses.
The time duration T of the analysis section is set according to the time duration from the action start point of the user U until the user U completes the follow-through action (the finishing action in swinging the club C). The time duration from an actual action start point until the swing is completed varies depending on the swing speed of the user U. In the present embodiment, an average swing speed of the user U is calculated based on time series of the observation data Db measured in advance multiple times. The time duration T of the analysis section corresponding to the average swing speed is selected for each user U and is stored in the storage device 14.
The comparison unit 34 detects the action start point by utilizing each observation data Db (S1). Considering that a change amount in the acceleration of the target observation point P has a tendency to increase immediately after the start of the downswing motion, the comparison unit 34 of the first embodiment detects the action start point according to a change amount ΔA of the acceleration indicated by each observation data Db.
More specifically, the comparison unit 34 sequentially determines whether or not the change amount ΔA in the acceleration indicated by the observation data Db, which is sequentially supplied from the observation data acquisition unit 32, exceeds a predetermined threshold value ATH. For example, the change amount ΔA is the sum of the absolute value of the observation value Bx, the absolute value of the observation value By and the absolute value of the observation value Bz. The comparison unit 34 repeats Step S1 until the change amount ΔA exceeds the threshold value ATH (S1: NO), and detects the point in time when the change amount ΔA exceeds the threshold value ATH as the action start point (S1: YES). Then, the process proceeds to Step S2.
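As an illustration of Step S1, the following is a minimal sketch assuming that each observation data Db is represented as a tuple of observation values (Bx, By, Bz) and that the change amount ΔA is the sum of their absolute values; the function name, the streaming interface and the threshold handling are illustrative rather than the actual implementation of the embodiment.

```python
from typing import Iterable, Optional, Tuple

ObservationData = Tuple[float, float, float]  # (Bx, By, Bz)

def detect_action_start(stream: Iterable[ObservationData],
                        a_th: float) -> Optional[int]:
    """Step S1: return the index of the first observation data Db whose
    change amount dA = |Bx| + |By| + |Bz| exceeds the threshold ATH,
    i.e. the action start point of the downswing."""
    for index, (bx, by, bz) in enumerate(stream):
        delta_a = abs(bx) + abs(by) + abs(bz)
        if delta_a > a_th:
            return index  # action start point detected
    return None  # the threshold was never exceeded
```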
The comparison unit 34 stretches (expands or contracts) the reference data series Sref on the time axis according to the time duration T of the analysis section, which is selected and stored in advance based on the average swing speed of the user U (S2). More specifically, the time duration Tref from the foremost reference data Dref of the reference data series Sref to the backmost reference data Dref (the time duration from the start point to the end point of the reference trajectory Oref) is adjusted to be the time duration T.
More specifically, when the time duration T is longer than the time duration Tref, the comparison unit 34 increases the amount of reference data Dref by performing an interpolation process on the reference data series Sref, thereby equalizing the amount of the reference data Dref with the amount of the observation data Db. For the interpolation process of the reference data series Sref, a known technology (for example, a linear interpolation process or a spline interpolation process) may be optionally employed.
On the other hand, when the time duration T is shorter than the time duration Tref, the comparison unit 34 decreases the amount of reference data Dref by performing a thinning process on the reference data series Sref, thereby equalizing the amount of the reference data Dref with the amount of the observation data Db. For the thinning process of the reference data series Sref, a known technology may be optionally employed. The reference trajectory Oref itself does not vary in the process of Step S2.
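The following is a minimal sketch of Step S2 under the assumption that the reference data series Sref is a uniformly sampled list of reference data Dref, each represented as a tuple (Rx, Ry, Rz). For simplicity, linear interpolation is used here to resample the series to the target amount of data, which covers both the lengthening and the shortening case; the embodiment itself describes an interpolation process when T is longer than Tref and a thinning process when T is shorter.

```python
from typing import List, Tuple

ReferenceData = Tuple[float, float, float]  # (Rx, Ry, Rz)

def stretch_reference_series(sref: List[ReferenceData],
                             target_count: int) -> List[ReferenceData]:
    """Step S2: stretch (expand or contract) the reference data series Sref
    on the time axis so that its amount of data equals the amount of
    observation data Db expected within the analysis section of duration T."""
    if target_count < 2 or len(sref) < 2:
        return list(sref[:target_count])
    stretched: List[ReferenceData] = []
    for i in range(target_count):
        # Position of the i-th output sample expressed on the input time axis.
        pos = i * (len(sref) - 1) / (target_count - 1)
        lo = int(pos)
        hi = min(lo + 1, len(sref) - 1)
        frac = pos - lo
        rx = sref[lo][0] + frac * (sref[hi][0] - sref[lo][0])
        ry = sref[lo][1] + frac * (sref[hi][1] - sref[lo][1])
        rz = sref[lo][2] + frac * (sref[hi][2] - sref[lo][2])
        stretched.append((rx, ry, rz))
    return stretched
```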
The comparison unit 34 compares each observation data Db which is sequentially generated by the observation data acquisition unit 32 with each of the reference data Dref of the reference data series Sref after adjustment in Step S2 (S3).
More specifically, each time the observation data acquisition unit 32 generates the observation data Db, the comparison unit 34 reads out the reference data Dref of the adjusted reference data series Sref in chronological order starting from the foremost data, and sequentially generates the comparison data Dc by setting the difference between the reference data Dref and the observation data Db to be the comparison data Dc. Each comparison data Dc includes a comparison value ΔTx, a comparison value ΔTy and a comparison value ΔTz indicating, for the respective axis directions, the difference between the observation value B of the observation data Db and the reference value R of the reference data Dref.
When it is determined that the time duration T has elapsed from the action start point (S4: YES), the comparison unit 34 completes the comparison process. As will be appreciated from the above description, the comparison data Dc indicating the difference between the observation trajectory Oa and the reference trajectory Oref is sequentially generated within the analysis section while the swing motion is performed.
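A minimal sketch of Steps S3 and S4 follows, assuming that one comparison data Dc consists of per-axis comparison values (ΔTx, ΔTy, ΔTz) obtained as absolute differences between the observation values of the observation data Db and the corresponding reference values of the stretched reference data Dref; whether the differences are signed or absolute is an assumption of this sketch, and the batch form shown here stands in for the sequential generation described above.

```python
from typing import List, Tuple

Triplet = Tuple[float, float, float]  # per-axis values (x, y, z)

def compare_section(observations: List[Triplet],
                    sref_stretched: List[Triplet]) -> List[Triplet]:
    """Steps S3/S4: compare each observation data Db generated within the
    analysis section with the corresponding reference data Dref of the
    stretched series and output comparison data Dc = (dTx, dTy, dTz)."""
    comparison_series: List[Triplet] = []
    for db, dref in zip(observations, sref_stretched):
        dc = (abs(db[0] - dref[0]),
              abs(db[1] - dref[1]),
              abs(db[2] - dref[2]))
        comparison_series.append(dc)
    return comparison_series
```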
The audio control unit 36 generates an audio signal S by modulating the audio data W stored in the storage device 14 according to each comparison data Dc sequentially generated by the comparison unit 34, and supplies the audio signal S to the sound emitting device 16 so that the corresponding sound is reproduced.
More specifically, the audio control unit 36 changes the pitch of each sample of the audio data W according to the comparison data Dc. For example, the audio control unit 36 converts the audio data W so that the change in pitch of each sample becomes larger as each comparison value (ΔTx, ΔTy and ΔTz) of the comparison data Dc becomes greater (that is, as the difference between the observation trajectory Oa and the reference trajectory Oref becomes larger). Each sample of the audio data W is sequentially (on a real-time basis) converted and output concurrently with the swing motion of the user U. That is, within the analysis section of the swing motion performed by the user U, the pitch of the reproduced sound varies moment by moment according to the difference between the observation trajectory Oa and the reference trajectory Oref.
Here, a known method is used in the process of adjusting the pitch by modulating each sample of the audio data W. As an example, a pitch adjustment method which adjusts the speed at which the audio data W is read out will be described below.
The audio data W is sampled waveform data having a predetermined time duration and is divided into a plurality of frames. Each frame corresponds to one of a plurality of sections which constitute the analysis section. When the user U performs the swing motion, the comparison unit 34 generates the comparison data Dc corresponding to each section. Based on the corresponding comparison data Dc, the audio control unit 36 determines the speed at which the audio data W of the frame corresponding to each section is read out from the storage device 14, and reads out the audio data W of that frame from the storage device 14 at the determined reading speed. When a reading speed faster than the standard reading speed is determined according to the value of the comparison data Dc, the sound of the corresponding frame is reproduced with a pitch higher than the reference pitch. In this case, reading of the entire frame is completed before the time duration of the corresponding frame elapses; at the point of time when reading of the entire frame is completed at the fast speed, the reading process restarts from the foremost sample of the corresponding frame, and this reading process is repeated until the time duration of the corresponding frame elapses. In contrast, when a reading speed slower than the standard reading speed is determined, reading of the entire frame is not completed before the time duration of the corresponding frame elapses. In this case, a method can be considered in which reading of the corresponding frame is stopped at the point of time when the end of the time duration of the corresponding frame is reached, and the process proceeds to a new reading process for the samples of the subsequent frame. In both the fast and slow reading cases, the waveform can be discontinuous at the connecting portion between frames; however, smooth connection of the waveform between the frames can be achieved by using a known cross-fade process.
The above-described pitch adjustment method is also called a cut and splice method, and is disclosed, for example, in U.S. Pat. No. 5,952,596.
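A simplified sketch of the frame-by-frame reading described above is shown below: each frame of the audio data W keeps its fixed output duration, while its samples are read at a speed factor derived from the comparison data Dc of the corresponding section, so that a larger difference gives a faster reading speed and therefore a higher pitch. Reading wraps around to the start of the frame when the frame is consumed early and simply stops at the frame boundary otherwise, as in the description; the cross-fade at the frame boundary and the mapping constant k are omitted or assumed here.

```python
from typing import List, Sequence

def render_frame(frame: Sequence[float], speed: float) -> List[float]:
    """Read one frame of the audio data W at the given speed while keeping
    the output duration of the frame fixed. speed > 1.0 raises the pitch
    (the frame is consumed early and re-read from its foremost sample);
    speed < 1.0 lowers the pitch (only the first part of the frame is read
    before the subsequent frame starts). The cross-fade process is omitted."""
    n = len(frame)
    if n == 0:
        return []
    out: List[float] = []
    for i in range(n):
        src = int(i * speed) % n  # wrap to the frame start when consumed early
        out.append(frame[src])
    return out

def speed_from_comparison(dc: Sequence[float], k: float = 0.1) -> float:
    """Hypothetical mapping: the larger the comparison values (dTx, dTy, dTz)
    of the comparison data Dc, the faster the reading speed and the higher
    the pitch of the reproduced sound."""
    return 1.0 + k * sum(dc)
```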
The pitch Pa1 and the pitch Pa2 are examples of the temporal transition of the pitch Pa in the reproduced sound within the analysis section, each corresponding to a different relationship between the observation trajectory Oa and the reference trajectory Oref.
According to the above-described configuration, by checking a change in the pitch Pa in the reproduced sound, the user U can intuitively understand how the difference between the observation trajectory Oa and the reference trajectory Oref is changed at each point in time (with the lapse of time).
As described above, in the first embodiment, the audio signal S is generated according to the comparison result between the observation data Db and the reference data Dref. Therefore, the user U can easily understand the difference between the observation trajectory Oa of the target observation point P and the reference trajectory Oref indicated by the reference data Dref.
In addition, the audio signal S is generated with respect to the swing motion on a real-time basis. Therefore, as compared to a configuration in which the sound is reproduced after the swing motion is performed, the user U can intuitively grasp the relationship between the actual swing motion and the difference between the observation trajectory Oa and the reference trajectory Oref.
That is, the comparison unit 34 sequentially compares the observation data with the reference data concurrently with the swing motion of the user U, and the audio control unit 36 generates the audio signal concurrently with each comparison performed by the comparison unit 34. In the above-described configuration, the audio signal is generated with respect to the action of the user on a real-time basis. Therefore, as compared to a configuration in which the audio signal is generated after the action to be analyzed is performed, the user can intuitively grasp the relationship between the actual action and the difference between the trajectory of the target observation point and the predetermined trajectory.
In a configuration in which the time duration of the reference data series Sref is fixed, when the time duration of the swing motion differs from the time duration of the reference data series Sref, the observation trajectory Oa can be evaluated as different from the reference trajectory Oref even though the observation trajectory Oa itself approximates the reference trajectory Oref. In the present embodiment, the reference data series Sref is stretched on the time axis according to the average swing speed of the user U. Therefore, it is possible to appropriately evaluate the difference between the observation trajectory Oa of the swing motion of the user U and the reference trajectory Oref.
That is, the comparison unit 34 stretches the time series of the reference data on the time axis and compares each stretched reference data with the observation data. In the above-described configuration, the time series of the reference data is stretched on the time axis. Therefore, if the time series of the reference data is stretched according to an action speed of the user, for example, it is possible to appropriately evaluate the difference between the trajectory of the target observation point and the predetermined trajectory, as compared to a case where the time duration of the time series of the reference data is fixed.
A second embodiment of the present invention will be described below. In the following description, elements whose operations and functions are the same as those in the first embodiment are denoted by the same reference numerals as used above, and a detailed description thereof is appropriately omitted here.
The storage device 14 of the second embodiment stores three types of audio data W (Wx, Wy and Wz) indicating waveforms of different sounds (for example, warning sounds such as beeping sounds having different pitches or sound qualities). The audio control unit 36 of the present embodiment controls the audio data Wx to be reproduced or stopped according to the comparison result (comparison value ΔTx) obtained when the comparison unit 34 compares the observation trajectory Oa with the reference trajectory Oref in the X-axis direction, controls the audio data Wy to be reproduced or stopped according to the comparison result in the Y-axis direction (comparison value ΔTy), and controls the audio data Wz to be reproduced or stopped according to the comparison result in the Z-axis direction (comparison value ΔTz). The audio signal S is generated by adding the audio data Wx, the audio data Wy and the audio data Wz.
More specifically, when the comparison value ΔT (ΔTx, ΔTy or ΔTz) in an axis direction falls below a predetermined threshold value TH (when the difference between the observation value B and the reference value R is small), the audio control unit 36 stops reproduction of the audio data W corresponding to that axis direction. When the comparison value ΔT exceeds the threshold value TH (when the difference between the observation value B and the reference value R is large), the audio control unit 36 reproduces the audio data W. It is also possible to set the threshold value TH individually for each axis direction.
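A minimal sketch of the per-axis control described above: each axis has its own audio data (Wx, Wy, Wz), and the data of an axis is mixed into the audio signal S only while the comparison value of that axis exceeds its threshold TH. The per-sample summation and the data layout are assumptions of this sketch, not the actual implementation.

```python
from typing import Dict, List, Sequence

def mix_axis_sounds(dc: Dict[str, float],
                    thresholds: Dict[str, float],
                    audio: Dict[str, Sequence[float]]) -> List[float]:
    """Second embodiment: reproduce the audio data Wx, Wy or Wz only for the
    axes whose comparison value (dTx, dTy or dTz) exceeds the threshold TH,
    and generate the audio signal S by adding the reproduced data."""
    length = min(len(w) for w in audio.values())
    signal = [0.0] * length
    for axis in ("x", "y", "z"):
        if dc[axis] > thresholds[axis]:  # large difference: reproduce W for this axis
            for i in range(length):
                signal[i] += audio[axis][i]
        # small difference: reproduction of this axis' audio data stays stopped
    return signal

# Example: only the X-axis difference exceeds its threshold, so only Wx is heard.
s = mix_axis_sounds({"x": 0.8, "y": 0.1, "z": 0.2},
                    {"x": 0.5, "y": 0.5, "z": 0.5},
                    {"x": [0.2, 0.2], "y": [0.1, 0.1], "z": [0.3, 0.3]})
```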
In the second embodiment as well, the same advantageous effects as those in the first embodiment can be achieved. In addition, in the second embodiment, the comparison result between the observation trajectory Oa and the reference trajectory Oref is individually reflected in the audio signal S for each axis direction. Therefore, the user U can recognize which of the three axis directions causes the difference between the observation trajectory Oa and the reference trajectory Oref.
That is, the audio control unit 36 selects audio data according to an instruction from the user from among a plurality of audio data items indicating different sounds, and generates the audio signal by converting the pitch and/or tempo of the selected audio data according to the comparison result obtained by the comparison unit. In the above-described configuration, it is possible to diversify the types of the reproduced sound of the audio signal as compared to a configuration in which the audio signal is generated by modulating one type of audio data.
A third embodiment of the present invention will be described below. The motion analysis device 10 of the third embodiment further includes a delay device 15 which delays the audio signal S generated by the audio control unit 36 by a predetermined delay time δ before the audio signal S is reproduced. In the third embodiment as well, the same advantageous effects as those in the first embodiment can be achieved. In addition, in the third embodiment, reproduction of the audio signal S is started at the point of time when the delay time δ elapses from the action start point. Accordingly, it is possible to prevent the concentration of the user U from being hindered before and after the action start point. For example, the time duration from the action start point until the ball hitting is 500 milliseconds. Therefore, if the delay time δ is set to approximately 500 milliseconds, there is an advantageous effect in that it is possible to prevent the concentration of the user U from being hindered during the action from the action start point until the ball hitting, which is the time when the user U particularly needs to concentrate. The configuration of the third embodiment (delay device 15) can also be applied to the second embodiment.
That is, the motion analysis device of the third embodiment includes the delay device 15 which delays the audio signal generated by the audio control unit 36. In this configuration, since the audio signal is delayed, it is possible, for example, to prevent the concentration of the user from being hindered during the period from when generation of the audio signal is started until the delay time elapses.
The above-described embodiments can be modified in various ways. Specific modification aspects will be described below. Two or more aspects optionally selected from the following examples can be appropriately combined with one another.
(1) In each embodiment described above, the change amount of the acceleration (Ax, Ay and Az) in each axis direction is used as the observation data Db as an example. However, it is also possible to use the acceleration (Ax, Ay and Az) itself as the observation data Db. Similarly, the numerical value itself of the acceleration in each direction can be used as the reference data Dref.
In addition, an element (detector) for detecting the movement of the target observation point P (the swing motion of the user U) is not limited to the acceleration sensor 20. For example, instead of the acceleration sensor 20 (or together with the acceleration sensor 20), it is also possible to use a speed sensor which detects the speed of the target observation point P or a direction sensor (for example, a gyro sensor) which detects the direction of movement of the target observation point P.
In addition, it is also possible to identify the observation trajectory Oa from video images of the swing motion of the user U captured by a video camera.
As will be appreciated from the above description, the observation data Db may be time-series data indicating the observation trajectory Oa of the target observation point P. Similarly, the reference data Dref may be time-series data indicating the reference trajectory Oref.
(2) In each embodiment described above, the pitch of the reproduced sound is changed according to the difference between the observation trajectory Oa and the reference trajectory Oref, but the method of modulating the audio data W may be selected optionally. For example, it is also possible to change the sound volume of the audio data W according to the difference between the observation trajectory Oa and the reference trajectory Oref (each comparison data Dc). In addition, in a configuration where the audio control unit 36 provides the audio data W with various sound effects (for example, an echo effect), it is also possible to control the extent of the sound effect provided to the audio data W according to the difference between the observation trajectory Oa and the reference trajectory Oref.
As will be appreciated from the above description, the audio control unit 36 may generate the audio signal S according to the comparison result (comparison data Dc) obtained by the comparison unit 34, and the specific content of the process is not limited to the above.
(3) It is also possible to selectively use a plurality of audio data W indicating different sounds. More specifically, it is preferable that the audio control unit 36 is configured to select audio data W from among the plurality of audio data W according to an instruction of the user U and to convert the pitch and/or tempo of the selected audio data W. For example, a plurality of audio data W indicating the wind noise generated by different types of the club C during the swing motion are stored in the storage device 14. The audio control unit 36 selects from the storage device 14 the audio data W corresponding to the type of the club C used by the user U, and generates the audio signal S by modulating the selected audio data W according to the comparison data Dc. The type of the club C (for example, a driver or an iron) is designated by the user to the motion analysis device 10 by operating an input device, for example.
According to the above-described configuration, it is possible to diversify the types of the reproduced sound.
(4) It is also possible to reproduce a specific sound (for example, sound effects) when the observation trajectory Oa approximates the reference trajectory Oref. For example, the storage device 14 stores sound effect data indicating a waveform of the sound effects. The sound effects are, for example, a sound of a ball entering a hole cup, a shout of joy, or a sound of applause.
The audio control unit 36 counts the number N of comparison data Dc, out of the comparison data Dc sequentially generated by the comparison unit 34, whose comparison values ΔT (ΔTx, ΔTy and ΔTz) exceed a threshold value. When the number N after the completion of the swing motion is below a predetermined threshold value (that is, when the observation trajectory Oa approximates the reference trajectory Oref), the audio control unit 36 acquires the sound effect data from the storage device 14 and supplies the sound effect data to the sound emitting device 16 as the audio signal S. That is, the audio signal S is reproduced in which the sound effects are added immediately after the sound indicated by the audio data W (the wind noise generated by the club C during the swing motion).
As described above, when the observation trajectory Oa approximates the reference trajectory Oref, the sound effects are reproduced. Therefore, there is an advantage in that the user U can intuitively recognize whether or not their own swing motion (observation trajectory Oa) is good.
Alternatively, the sound effects can be added to the audio signal S when the observation trajectory Oa differs from the reference trajectory Oref (when the above-described number N exceeds the threshold value). That is, the audio control unit 36 controls whether to add the sound effects to the audio signal S according to the degree of approximation between the observation trajectory Oa and the reference trajectory Oref.
That is, in this modification example, the audio control unit 36 generates, according to the degree of approximation between the trajectory specified by the observation data and the predetermined trajectory, an audio signal in which predetermined sound effects are added to the sound corresponding to the comparison result obtained by the comparison unit 34. In this configuration, the sound corresponding to the comparison result obtained by the comparison unit 34 and the predetermined sound effects are reproduced according to the degree of approximation between the trajectory of the target observation point and the predetermined trajectory. Therefore, there is an advantage in that the user can intuitively recognize whether or not the action is good.
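A minimal sketch of this modification example, assuming that a comparison data Dc is counted when any of its comparison values exceeds the threshold (the exact counting condition is an assumption) and that adding the sound effects "immediately after" is modeled as concatenating the effect waveform to the swing sound:

```python
from typing import List, Sequence, Tuple

def append_sound_effect(comparison_series: Sequence[Tuple[float, float, float]],
                        swing_sound: List[float],
                        effect_sound: List[float],
                        value_threshold: float,
                        count_threshold: int) -> List[float]:
    """Count the comparison data Dc whose comparison values exceed the
    threshold; if the number N after the swing is below the predetermined
    value, add the sound effects immediately after the swing sound."""
    n = sum(1 for dc in comparison_series
            if any(value > value_threshold for value in dc))
    if n < count_threshold:  # observation trajectory approximates the reference
        return swing_sound + effect_sound
    return swing_sound  # trajectories differ: no sound effect is added
```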
(5) In the third embodiment, the delay device 15 delays the audio signal S by the predetermined delay time δ, but the delay time δ can also be controlled so as to be variable. For example, in a configuration where the comparison unit 34 detects the point of time of ball hitting by the club C from a temporal change of the observation data Db (or the comparison data Dc), a configuration can be employed in which the delay device 15 delays the audio signal S from the action start point to the point of time the ball is hit (that is, the delay time δ is set to the time from the action start point to the point of time the ball is hit). In the above-described configuration, the sound is not reproduced from the action start point to the point of time the ball is hit, but is reproduced after the point of time the ball is hit.
(6) Each element described above as an example can be appropriately omitted. For example, it is possible to omit the storage device 14 by acquiring various data items from an external device separate from the motion analysis device 10. In addition, the sound emitting device 16 can be omitted in a configuration where the audio signal S generated by the audio control unit 36 is transmitted to an external device via a communication network or a portable recording medium and is reproduced by a sound emitting device of the external device.
(7) In the first embodiment, the observation data acquisition unit 32 sequentially generates the observation data Db by using the sensor output Da supplied from the acceleration sensor 20. However, a configuration can also be employed in which the observation data acquisition unit 32 receives the observation data Db sequentially generated by the acceleration sensor 20. That is, an element which acquires the observation data Db (observation data acquisition means) includes both an element which itself generates the observation data Db from the detection result of the acceleration sensor 20 and an element which receives the observation data Db from an external device (the acceleration sensor 20).
(8) In each embodiment described above, the motion analysis device 10 which analyzes the swing motion of the golf club C has been described as an example. However, motions to which the motion analysis device 10 can be applied are not limited to actions in golf. For example, the motion analysis system 100 (motion analysis device 10) can also be used when analyzing a swing motion of a bat in baseball, a swing motion of a racket in tennis, or a casting action of a fishing rod in fishing.
(9) It is also possible to change the amount of reference data Dref per unit time (sampling cycle) within the analysis section. For example, with regard to a section immediately before and immediately after the impact within the analysis section, it is preferable to increase the amount of the reference data Dref per unit time as compared to the other sections. In a section having a larger amount of reference data Dref per unit time, the comparison between the observation data Db and the reference data Dref is performed at a shorter interval, and the observation trajectory Oa and the reference trajectory Oref are compared more closely with each other. Therefore, it is possible, for example, to analyze in detail the difference between the trajectories in the section immediately before and immediately after the impact. In addition, since the amount of reference data Dref is increased only in a certain portion within the analysis section, there is an advantage in that the amount of data can be reduced as compared to a configuration in which the amount is increased over the entire analysis section.
A configuration may be considered in which the amount of reference data Dref is changed in advance inside and outside a predetermined section within the analysis section. Alternatively, when the comparison unit 34 increases or decreases the amount of reference data Dref by the interpolation process or the thinning process for the reference data series Sref, it is also possible to change the amount of reference data Dref per unit time inside and outside the predetermined section within the analysis section.
In addition, it is also possible to decrease the amount of reference data Dref per unit time for a section within the analysis section where a detailed analysis is not required.
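A minimal sketch of this modification, assuming a uniformly sampled reference data series Sref and index-based section boundaries: every reference data Dref is kept inside the section requiring detailed analysis (for example, immediately before and after the impact), while only every coarse_step-th data is kept elsewhere. The parameter names and the selection rule are illustrative.

```python
from typing import List, Tuple

ReferenceData = Tuple[float, float, float]  # (Rx, Ry, Rz)

def vary_reference_density(sref: List[ReferenceData],
                           dense_start: int, dense_end: int,
                           coarse_step: int) -> List[ReferenceData]:
    """Keep every reference data Dref inside [dense_start, dense_end)
    (for example, the section around the impact) and only every
    coarse_step-th data outside it, so that the comparison interval is
    shorter where a detailed analysis is required while the total amount
    of data stays small."""
    varied: List[ReferenceData] = []
    for index, dref in enumerate(sref):
        if dense_start <= index < dense_end or index % coarse_step == 0:
            varied.append(dref)
    return varied
```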
The motion analysis device according to each embodiment described above is realized by hardware (an electronic circuit) such as a digital signal processor (DSP) dedicated to analyzing motions of a user, or by cooperation between a general-purpose arithmetic processing unit such as a central processing unit (CPU) and a program.
A program of the present invention causes a computer to execute an observation data acquisition process for acquiring observation data indicating a trajectory of a target observation point which moves in conjunction with an action of a user; a comparison process for comparing each of a plurality of reference data indicating a predetermined trajectory of the target observation point with the observation data acquired in the observation data acquisition process; and an audio control process for generating an audio signal according to a result of the comparison process.
The above-described program is provided by being stored in a computer-readable recording medium and is installed in the computer. For example, the recording medium is a non-transitory recording medium, preferably an optical recording medium (an optical disk) such as a CD-ROM. However, the recording medium may be any known type of recording medium such as a semiconductor recording medium or a magnetic recording medium.
In addition, the program of the present invention can also be provided by a distribution server device through distribution via a communication network and then installed in the computer.
While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. Additions, omissions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description, and is only limited by the scope of the appended claims.