Various characteristics of the movement of an individual, such as the posture, stability, and range of motion of the individual, may serve as useful metrics regarding the health and fitness of the individual. For example, an individual may have fitness goals relating to range of motion, or may wish to develop adequate mobility before undertaking an activity for which mobility would be beneficial. However, determining characteristics of movement and identifying methods for improving mobility typically require analysis of the movement by a knowledgeable expert.
The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.
While implementations are described in this disclosure by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used in this disclosure are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean “including, but not limited to”.
Evaluation of the movement of an individual may include measurement of various characteristics, such as the posture, stability, and mobility (e.g., range of motion) of the individual. In some cases, an individual may seek to improve quality of movement as a fitness goal. In other cases, an individual may seek to improve quality of movement in preparation for undertaking an activity for which such movement may be beneficial. However, the evaluation of characteristics of an individual's movement may require subjective analysis, such as by a trainer or other expert, which may be affected by error or bias associated with an expert. After evaluating the characteristics of an individual's movement, a trainer or other expert may provide recommendations regarding activities to improve the individual's movement. However, due to the subjective nature of such an evaluation, recommended methods by which an individual may improve various characteristics of movement, such as fitness exercises intended to improve posture, stability, or mobility, may achieve suboptimal results.
Described in this disclosure are techniques for evaluating characteristics of movement of a user, providing an output indicative of the characteristics such as one or more scores, and recommending exercises or other activities that may improve the characteristics of movement. A user may be presented with instructions to perform one or more movements, such as by presenting a video or other types of instruction that demonstrate or explain performance of the movements. The user may perform the movement(s) within a field of view of a camera, which may acquire video data representing the user performing the movement(s). For example, the user may be instructed to perform a series of five selected movements, and may perform from three to five repetitions of each movement. In some implementations, prior to acquiring video data, a device associated with the camera may determine whether the user is positioned relative to the camera in a manner that would enable evaluation of the movements of the user based on the acquired video data. For example, the distance of the user relative to the camera may be determined to be within a threshold range, and a determination may be made that at least a threshold portion of the body of the user is visible within the field of view of the camera. In some implementations, data from a depth sensor, time-of-flight sensor, or other types of sensors may be used to determine a distance of the user relative to the camera or whether at least a threshold portion of the body of the user is within the field of view of the camera. For example, output from a sensor may include point cloud data that may be processed to determine locations of body parts of the user.
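By way of non-limiting illustration, the following Python sketch shows one way such a positioning check might be implemented; the function name, threshold values, and keypoint counts are hypothetical assumptions and are not drawn from this disclosure:

```python
MIN_DISTANCE_M = 1.5        # hypothetical minimum distance from the camera
MAX_DISTANCE_M = 4.0        # hypothetical maximum distance from the camera
MIN_VISIBLE_FRACTION = 0.9  # hypothetical threshold portion of the body

def ready_for_capture(distance_m, visible_keypoints, expected_keypoints):
    """Return True when the user's distance is within the threshold range
    and at least a threshold portion of the body is within the field of
    view (approximated here by the fraction of detectable keypoints)."""
    in_range = MIN_DISTANCE_M <= distance_m <= MAX_DISTANCE_M
    visible_fraction = visible_keypoints / expected_keypoints
    return in_range and visible_fraction >= MIN_VISIBLE_FRACTION

print(ready_for_capture(2.3, 16, 17))  # True: in range, 16/17 ≈ 0.94 visible
```

A comparable check could be driven by depth-sensor or point cloud data rather than counted keypoints, as noted above.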
In some implementations, data from other sensors, in addition to video data from a camera, may be acquired. For example, movement or stability of a body part of a user may be determined using data from one or more accelerometers; movement or a position of one or more body parts of the user may be determined based on sensors that are worn, held, or positioned in an environment with the user; physiological sensors may be used to determine data indicative of one or more physiological values associated with the user; and so forth. Data from sensors may also be used to determine the positions or movement of one or more body parts of a user that may be occluded from view by a camera.
Based on the acquired video data, and in some cases other sensor data, pose data may be determined, the pose data representing positions of the body of the user during performance of the movement(s). For example, in each frame of the acquired video data, the pose of the body of the user may be determined using pose extraction, object recognition, or other image analysis techniques. Each pose may be represented by a set of points, each point representing the location and orientation of a body part of the user, such as positions of the user's knees, feet, hips, head, and so forth. The locations and orientations of one or more points of a pose may be constrained by the location of one or more other points based on a set of rules. For example, the location of a point representing a user's foot may be constrained based on the location of a point representing the user's knee, and vice versa. As another example, the location of points representing a user's wrist, shoulder, and elbow may be constrained by a rule indicating a maximum or minimum angle at which the elbow may be positioned. The determined pose data may also include segmentation data, shape data, and so forth, which may be determined based on the video data. For example, data indicating portions of a frame of a video that depict a user, a background, other objects, and so forth may also be determined.
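As a non-limiting illustration, the following sketch shows one possible representation of a pose as a set of points, together with an example rule constraining related points (here, a hypothetical limit on the angle formed at the elbow); all names and limits are illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    """One body part of a pose: a named location and orientation."""
    name: str
    x: float
    y: float
    orientation_deg: float

def elbow_angle_ok(wrist: Point, elbow: Point, shoulder: Point,
                   min_deg: float = 5.0, max_deg: float = 180.0) -> bool:
    """Example constraint rule: the angle formed at the elbow by the
    wrist and shoulder must fall within an allowed range."""
    a = math.atan2(wrist.y - elbow.y, wrist.x - elbow.x)
    b = math.atan2(shoulder.y - elbow.y, shoulder.x - elbow.x)
    angle = abs(math.degrees(a - b)) % 360.0
    angle = 360.0 - angle if angle > 180.0 else angle
    return min_deg <= angle <= max_deg

arm = [Point("wrist", 0.9, 0.5, 0.0), Point("elbow", 0.7, 0.5, 0.0),
       Point("shoulder", 0.5, 0.3, 0.0)]
print(elbow_angle_ok(*arm))  # True: roughly a 135-degree elbow angle
```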
In some implementations, the pose data may be subdivided into portions representative of particular movements, individual repetitions of movements, or other portions based on segmentation data. For example, segmentation data may indicate poses associated with a single repetition of a movement, and based on the segmentation data, portions of the pose data that are associated with individual repetitions may be identified. Identification of portions of the pose data that correspond to particular movements and repetitions of the movements may facilitate evaluation of the movement of the user, and in some cases may be used when computing a score or other output. For example, the user may perform three repetitions of a movement, and an average score based on each repetition may be used when presenting an output.
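For illustration only, the following sketch shows how pose data might be divided into repetitions using segmentation boundaries and how a per-repetition score might be averaged; the function names, frame boundaries, and scores are hypothetical:

```python
def split_by_repetition(pose_frames, boundaries):
    """Given (start_frame, end_frame) pairs from segmentation data, return
    the portion of the pose data for each individual repetition."""
    return [pose_frames[start:end] for start, end in boundaries]

def average_score(per_repetition_scores):
    """E.g., a user performs three repetitions; the presented score may be
    the mean of the scores determined for each repetition."""
    return sum(per_repetition_scores) / len(per_repetition_scores)

repetitions = split_by_repetition(list(range(90)), [(0, 30), (30, 60), (60, 90)])
print(len(repetitions), average_score([78.0, 82.0, 80.0]))  # 3 80.0
```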
The determined pose data that is representative of the user's positions during performance of the movement(s) may then be classified based on existing movement data that associates poses of users with errors in movement. For example, movement data may include video data acquired from other users performing the same or similar movements. Pose data may be determined based on the video data acquired from other users and may be annotated with indications of errors associated with the pose data. For example, the movement data may include one or more videos representing instructors, trainers, or other experts performing the movement(s) correctly (e.g., without errors). Movement data may also include videos representing users performing the movement(s) while committing one or more errors. The videos may be associated with annotations, provided by experts or other users, that indicate the errors present in the videos and, in some cases, an indication of a severity of the error(s) or the absence of errors. In some cases, movement data may include videos representing users performing the movement(s) that have been previously evaluated using the systems described herein or through other analysis techniques performed using one or more computing devices. Independent of the source of the movement data, in some implementations, a machine learning system may be used to classify the pose data determined from the user based on the movement data. For example, using pose data determined from multiple input videos of the movement data as inputs, a neural network or other type of machine learning system may classify the poses of the user to determine corresponding poses of the movement data within a threshold level of confidence. Based on correspondence between the movement data and the pose data determined from the user, one or more errors associated with the movement(s) performed by the user, and in some implementations an indication of severity for one or more of the errors or an absence of errors, may be determined. For example, if a particular movement performed by the user closely corresponds to a set of poses included in the movement data within a threshold level of confidence, the errors associated with the poses of the movement data may be assumed to be present in the movement performed by the user.
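A non-limiting sketch of such a classification follows. A production system might use a neural network as described above; a simple distance-based matcher is shown here only to make the thresholded-confidence behavior concrete, and all names and values are hypothetical:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # hypothetical

def classify_errors(user_seq, references):
    """Compare a user's pose sequence (an array of poses) against reference
    sequences annotated with errors; return the annotations of the closest
    reference if the match confidence meets the threshold, else None.
    Assumes sequences have been resampled to a common shape."""
    best_ref, best_conf = None, 0.0
    for ref in references:
        distance = float(np.linalg.norm(user_seq - ref["poses"]))
        confidence = 1.0 / (1.0 + distance)  # map distance into (0, 1]
        if confidence > best_conf:
            best_ref, best_conf = ref, confidence
    if best_ref is not None and best_conf >= CONFIDENCE_THRESHOLD:
        return best_ref["errors"]  # assumed present in the user's movement
    return None
```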
As one example, based on the pose data, the movement of the user may be evaluated with regard to twenty possible errors. Each error may be associated with an indication of severity, such as a severity level of zero if the error did not occur in the user's movement(s), a severity level of one for a minor error, a severity level of two for a moderate error, and a severity level of three for a severe error or a failure to perform a particular movement. Continuing the example, the user may be instructed to perform five movements, and each movement may correspond to one or more possible errors. In some cases, multiple movements may be associated with one or more of the same errors. In other cases, a movement may be associated with a set of errors that is separate and independent from the sets of errors associated with other movements. The errors, or lack of errors, that are determined to be associated with the movement(s) of the user, based on correspondence between the pose data and the movement data, may be used to determine an output indicative of characteristics of the movement of the user.
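For concreteness, the severity levels described above might be encoded as follows; the error names shown are hypothetical:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Severity levels as described above."""
    NONE = 0      # the error did not occur in the user's movement
    MINOR = 1
    MODERATE = 2
    SEVERE = 3    # severe error, or failure to perform the movement

# Hypothetical evaluation of one movement against its possible errors:
evaluation = {"knee_valgus": Severity.MODERATE, "heel_lift": Severity.NONE}
```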
For example, score data may associate errors with score values, which may be used to determine one or more scores that represent a characteristic of the user's movement. Continuing the example, scores representative of the movement of a user may include a first score indicative of a mobility of the user's shoulder, a second score indicative of a stability of the user's shoulder, a third score indicative of a mobility of the user's hip, a fourth score indicative of a stability of the user's hip, a fifth score indicative of a mobility of the user's lower body, a sixth score indicative of a stability of the user's lower body, and so forth. Each score may be determined based on one or multiple score values, which may be determined based on correspondence between the score data and the errors associated with the pose data. In some cases, a single error may contribute to multiple scores, and the extent to which a particular error contributes to a particular score may be determined based on one or more weight values. For example, an error in the position of a user's arm may significantly contribute to a score indicative of stability of the user's shoulder, may slightly contribute to a score indicative of stability of the user's core, and may not contribute to a score indicative of stability of the user's lower body. A severity level associated with an error may also affect the extent to which the error contributes to a score. For example, a severe error regarding the placement of a user's foot may significantly affect the extent to which the error contributes to a score indicative of stability of the user's lower body, a minor error regarding the placement of the user's foot may only slightly affect the score, and the absence of an error may not affect the score or may affect the score in a direction opposite the direction in which a score is affected by the presence of an error.
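The following non-limiting sketch illustrates how score data might map errors, severities, and weights to a single score value; all weights, penalty scales, and names are hypothetical assumptions:

```python
WEIGHTS = {
    # (error, score name) -> weight; hypothetical values
    ("arm_position", "shoulder_stability"): 1.0,   # significant contribution
    ("arm_position", "core_stability"): 0.2,       # slight contribution
    ("arm_position", "lower_body_stability"): 0.0, # no contribution
}

def score(errors, score_name, base=100.0):
    """Subtract a weighted penalty per error; severity (0-3) scales the
    penalty, so an absent error (severity 0) does not affect the score."""
    penalty = sum(
        WEIGHTS.get((error, score_name), 0.0) * severity * 10.0
        for error, severity in errors.items()
    )
    return max(0.0, base - penalty)

print(score({"arm_position": 2}, "shoulder_stability"))  # 100 - 1.0*2*10 = 80.0
```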
Based on the score data, each of the errors associated with the pose data may be used to determine at least one score representative of a characteristic of the movement of the user, which may be presented in an output. In some implementations, an output may also include an indication of one or more activities that may be performed to improve characteristics of the user's movement. For example, activity data may associate one or more activities to improve characteristics of movement with corresponding errors. Based on correspondence between the activity data and the particular errors associated with the pose data of the user, one or more activities to improve characteristics of the user's movements may be determined. For example, in response to errors associated with stability of a user's core, one or more activities may include exercises intended to strengthen core muscles and improve core stability, or a recommendation of one or more items or services that may facilitate improvement of stability or other characteristics of movement. One or more of the determined activities may also be presented in the output in addition to or in place of the score(s) representative of characteristics of the user's movement. In some implementations, in cases where multiple errors are determined, one or more of the errors may be associated with a priority level. In such a case, activities associated with an error having a higher priority may be presented in place of other activities associated with an error having a lower priority. Additionally, in some implementations, it may be determined based on the pose data that the body of the user is constrained from performance of one or more positions. In such a case, activities that include such a position may not be recommended, or an indication of one or more modifications to the activity may be included in the output. In other implementations, if no errors in movement are determined, activities that correspond to particular poses of the user or scores determined for the user may be determined and recommended. Additionally, in some cases, if no errors in the movement of the user are determined, an activity that may facilitate retention of sufficient mobility or stability and maintenance of the ability to perform the movements without occurrence of errors may be recommended.
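As a non-limiting illustration, activity data relating errors to activities, with priority levels and positions the user is constrained from performing, might be applied as follows; all error names, priorities, activities, and positions are hypothetical:

```python
ACTIVITY_DATA = {
    # error -> (priority, [(activity, required position), ...]); hypothetical
    "core_instability": (1, [("plank progression", "prone"),
                             ("dead bug", "supine")]),
    "ankle_mobility": (2, [("deep squat hold", "deep_squat")]),
}

def recommend(errors, constrained):
    """Return activity names for higher-priority errors first (lower number
    = higher priority), omitting activities whose required position the
    user's body is constrained from performing."""
    ranked = sorted((ACTIVITY_DATA[e] for e in errors if e in ACTIVITY_DATA),
                    key=lambda entry: entry[0])
    return [name for _, activities in ranked
            for name, position in activities
            if position not in constrained]

print(recommend(["ankle_mobility", "core_instability"], {"deep_squat"}))
# -> ['plank progression', 'dead bug']
```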
At a subsequent time, the user may perform the movement(s) within a field of view of a camera, and additional video data may be acquired. The additional video data may be analyzed using the same processes described previously to determine errors associated with the movement(s), or the absence of errors, and one or more scores based on the errors or absence thereof. In some implementations, an output may include scores determined at multiple times, enabling a user to visualize improvement in characteristics of movement over time, or recognize types of movement or regions of the body where improvement may be necessary. An indication of subsequent activities may be included in an output based on the errors determined at the subsequent time. In some implementations, if an indication of a particular activity has previously been provided to a user within a threshold time period, a different activity may be recommended to prevent frequent repetition of identical recommendations.
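A minimal sketch of such a recency filter, with a hypothetical one-week threshold, might look like the following:

```python
import time

RECENCY_WINDOW_S = 7 * 24 * 3600  # hypothetical: one week

def filter_recent(candidates, last_shown):
    """Prefer activities not presented within the threshold time period;
    last_shown maps activity name -> timestamp of last presentation."""
    now = time.time()
    fresh = [a for a in candidates
             if now - last_shown.get(a, 0.0) > RECENCY_WINDOW_S]
    return fresh or candidates  # fall back if every candidate is recent
```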
Implementations described herein may therefore enable characteristics of the movement of a user to be evaluated without requiring subjective analysis by an expert, through use of an existing body of movement data that may be used to classify a video or other data acquired from the user. Errors determined from the video, or other data that is acquired from the user, may be used to determine one or more scores indicative of particular characteristics of movement and particular regions of the body, which may be presented to the user as meaningful indications of movement characteristics that may represent areas where the user excels or where improvement is possible. Additionally, based on the movements of the user, specific activities that address errors associated with the movement(s) of the user may be determined and presented in an output, enabling a user to potentially improve characteristics of movement through performance of the activities. While implementations described herein include use of a camera to acquire video data, in other implementations, other types of sensors, such as depth sensors, time-of-flight sensors (e.g., lidar), and so forth may determine point cloud data that may be processed to determine locations of body parts of a user. For example, pose data may be determined based on point cloud data or other types of data from a sensor in addition to or in place of video data acquired using a camera.
One or more servers 110 may acquire the video data 104, and in some implementations the sensor data 105, from the user device 106.
An image analysis module 112 associated with the server(s) 110 may determine pose data 114 based on the acquired video data 104 and, in some implementations, the acquired sensor data 105. The pose data 114 may represent one or more positions of the user 102 during performance of the movement(s). For example, the pose data 114 may include a determined position for at least a subset of the frames of the video data 104, and each position (e.g., pose) may be represented by a set of points, each point representing the location and orientation of a body part of the user 102. In some implementations, the image analysis module 112 may include one or more object recognition or segmentation algorithms that may identify portions of frames of video data 104 in which the user 102 is visible. For example, an object recognition algorithm may determine portions of a frame of video data 104 that correspond to particular body parts of the user 102, each of which may be represented as a point within a pose. In some implementations, the image analysis module 112 may include algorithms for determining locations of one or more body parts of the user 102 based on sensor data 105, such as data indicative of the location of a sensor 103 or movement of a sensor 103. The locations and orientations of one or more points may be constrained by the location of one or more other points based on a set of rules. In some implementations, the pose data 114 may associate an identifier of each point of a pose with a particular location or orientation of the point. In some implementations, data regarding a point may also indicate movement of the point, a confidence value associated with the location of the point, and so forth. In some implementations, the pose data 114 may also include segmentation information, shape information, information regarding a three-dimensional position of an individual or other object (such as information determined using a depth (e.g., RGB-D) camera), and so forth that may indicate portions of video data 104 that include the user 102, a background, one or more other objects, and so forth. The pose data 114 may also include time data or frame data indicative of a frame or time associated with one or more poses represented by the pose data 114. For example, the pose data 114 may associate a first frame identifier or first time data indicative of a first time with a first set of points indicative of a first position of the user 102, a second frame identifier or second time data with a second set of points indicative of a subsequent position of the user 102, and so forth.
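By way of non-limiting illustration, pose data 114 associating frame identifiers and time data with sets of points, including per-point confidence values, might be structured as follows; the field names and values are hypothetical:

```python
pose_data = {
    "frame_0001": {
        "time_s": 0.033,
        "points": {
            "left_knee": {"x": 0.41, "y": 0.63,
                          "orientation_deg": 12.0, "confidence": 0.97},
            "left_foot": {"x": 0.40, "y": 0.91,
                          "orientation_deg": 3.0, "confidence": 0.95},
        },
    },
    # "frame_0002": {...}, a second set of points for a subsequent position
}
```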
In some implementations, the pose data 114 may be further analyzed or processed based on segmentation data. For example, segmentation data may include data indicative of poses or other types of data representing particular movements or repetitions of movements. Based on the segmentation data, a portion of the pose data 114 that corresponds to a particular movement or a particular repetition of a movement may be determined. In some implementations, based on the segmentation data, one or more portions of the pose data 114 that do not correspond to any of the movements may be determined and disregarded from analysis, such as to conserve computational resources, reduce data transmission, and so forth.
An error determination module 116 associated with the server(s) 110 may determine error data 118 based on correspondence between the pose data 114 and movement data 120. The movement data 120 associates pose data 114, determined from one or more videos or other sources, with corresponding errors represented by the pose data 114. For example, the movement data 120 may include one or more videos, or pose data 114 determined from the one or more videos, that represent other users 102 performing the movement(s). The poses indicated in the movement data 120 may be annotated, such as through expert review or an automated system, to determine errors represented by particular poses, and the absence of errors associated with particular poses. Continuing the example, a first video of the movement data 120 may include an instructor, trainer, or other expert performing the movement(s) correctly, representing the movement(s) performed without errors, a second video may include an expert or non-expert user 102 performing the movement(s) with a first set of errors, a third video may include an expert or non-expert user 102 performing the movement(s) with a second set of errors, and so forth.
In some implementations, the error determination module 116 may include a neural network or other type of machine learning system, which may classify the pose data 114 determined from the user 102 based on the movement data 120. For example, if a portion of the pose data 114 associated with a particular movement corresponds to a portion of the movement data 120 associated with the particular movement within a threshold level of confidence, this may indicate that the errors, or lack of errors, associated with the corresponding portion of the movement data 120 may also have occurred during performance of the movement by the user 102. Based on the correspondence between the movement data 120 and the pose data 114, the determined error data 118 may indicate particular errors that are associated with the pose data 114, an indication of the severity of one or more of the errors, and in some cases a lack of errors. For example, the movement data 120 may associate the absence of errors, one or more errors, or one or more indications of severity for the error(s), with each portion of the movement data 120 that corresponds to a particular movement.
The error data 118 associated with the pose data 114 determined from the movement(s) of the user 102 may be used to determine one or more characteristics of the movement(s). For example, a scoring module 122 associated with the server(s) 110 may determine one or more score values 124 based on the error data 118 and score data 126 that associates particular error data 118 with corresponding score values 124. In some implementations, the scoring module 122 may determine multiple score values 124 indicative of characteristics of the movement(s) of the user 102. In some cases, individual score values 124 may be associated with particular regions of the body of the user 102. For example, a first score value 124 may be indicative of a posture of the user 102, a second score value 124 may be indicative of a mobility of a shoulder of the user 102, a third score value 124 may be indicative of a stability of the shoulder of the user 102, a fourth score value 124 may be indicative of a stability of a core of the user 102, a fifth score value 124 may be indicative of a mobility of a hip of the user 102, a sixth score value 124 may be indicative of a stability of the hip of the user 102, a seventh score value 124 may be indicative of a mobility of a lower body of the user 102, and an eighth score value 124 may be indicative of a stability of the lower body of the user 102. In some implementations, the score value(s) 124 may include one or more average or aggregate values determined from one or more individual score values 124. For example, one or more determined score values 124 may include an overall indication of a mobility of the user 102, an overall indication of a stability of the user 102, and an overall indication of a posture of the user 102, each of which may be determined based on individual score values 124 associated with the corresponding characteristic of the movement of the user 102 for a particular body part. In some implementations, an overall movement score may be determined based on one or more of the individual score values 124. In some cases, one or more individual score values 124 may be weighted differently when determining one or more aggregate or average score values 124. For example, a score value 124 representing stability of the shoulder of the user 102 may be weighted less when determining an overall score value 124 for stability than a score value 124 representing stability of the lower body of the user 102.
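The following non-limiting sketch illustrates such a weighted aggregation of per-region score values 124 into an overall score value; the region weights shown are hypothetical:

```python
REGION_WEIGHTS = {"shoulder": 0.2, "core": 0.3, "hip": 0.2, "lower_body": 0.3}

def overall_stability(region_scores):
    """Aggregate per-region stability scores, weighing e.g. the lower body
    more heavily than the shoulder, as described above."""
    total = sum(REGION_WEIGHTS[r] for r in region_scores)
    return sum(REGION_WEIGHTS[r] * s for r, s in region_scores.items()) / total

print(overall_stability({"shoulder": 72.0, "core": 85.0,
                         "hip": 80.0, "lower_body": 78.0}))  # ≈ 79.3
```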
Based on the score data 126, the error data 118 may affect one or multiple score values 124. For example, the score data 126 may indicate a weight associated with each error, which may vary depending on the particular score value 124. The score data 126 may include rules, weights, algorithms, and so forth that indicate the effect of a particular error, and in some cases an indication of a severity of the error, when determining particular score values 124 representative of a characteristic of the movement of the user 102. Continuing the example, an error associated with a position of the shoulder of the user 102 may significantly affect a score value 124 indicative of a stability of the shoulder, may slightly affect a score value 124 indicative of a stability of the core, and may not affect a score value 124 indicative of a stability of the lower body. As another example, an error associated with an indication of high severity may significantly affect a score value 124, while an error associated with an indication of low severity may only slightly affect the score value 124. As yet another example, the absence of an error associated with the position of a body part may not affect a score associated with movement of the user 102, or may cause the score to be modified in a direction opposite the direction that the presence of one or more errors may cause the score to be modified.
The error data 118 associated with the pose data 114 may also be used to determine one or more activities that may improve characteristics of movement of the user 102, such as by reducing the severity or occurrence of errors, improving stability or mobility associated with movement of the user 102, increasing one or more suboptimal score values 124, maintaining one or more acceptable or optimal score values, and so forth. For example, a recommendation module 128 associated with the server(s) 110 may determine correspondence between the error data 118 and activity data 130 that associates the error data 118 or score values 124 with corresponding activities to determine activity information 132 indicative of one or more activities that correspond to the errors, or lack of errors, associated with the movement(s) of the user 102. Continuing the example, the activity data 130 may associate an error or set of errors with one or more corresponding activities that may be used to reduce occurrence or severity of the errors. In some cases, the activity data 130 may associate indications of severity for particular errors with corresponding activities. For example, a particular activity may be suitable to improve a severe error but less suitable to improve a minor error, or alternatively, an activity may be unsuitable to perform if the user 102 commits a particular error when performing a movement, but suitable if the user 102 does not commit the error.
An output module 134 associated with the server(s) 110 may determine output data 136 based on the determined score values 124 and activity information 132. At least a portion of the output data 136 may be provided to the user device 106 to cause presentation of a second output 108(2) that may include an indication of one or more score values 124 and, in some implementations, an indication of at least a portion of the activity information 132.
The particular score values 124 associated with the movement of a user 102 may be determined based on correspondence between the error data 118 associated with the movement of the user 102 and score data 126.
The intermediate values 303 may be used to determine score values 124 associated with characteristics of movement of the user 102 over a period of time, such as during performance of one or more movements represented by the video data 104. For example, a particular error, and in some cases a severity level 304 associated with the error, may cause a particular value or set of values 305 to be determined, and a particular score value 124 to be determined based on the set of values 305. In other cases, a particular error may cause a particular modification to be applied to a score value 124. Different errors may cause different score values 124 or modifications to score values 124. For example, different score values 124 may be determined for different regions of the body of the user 102, such as the shoulder, core, hip, or lower body, and different score values 124 may be determined for different characteristics of movement, such as posture, stability, and mobility. Additionally, in some cases, one or more score values 124 may be determined based in part on other score values 124, such as a score value 124 indicative of overall stability of a user 102 during performance of the movements based on individual score values 124 indicative of the stability of each region of the body of the user 102. The score data 126 may include one or more rules, algorithms, weight values, and so forth that may determine the manner in which particular score values 124 are determined based on error identifiers 302, severity levels 304, and other score values 124.
The first region identifier 306(1) is also shown associated with a second characteristic identifier 308(2), such as an indication of posture or mobility. The second characteristic identifier 308(2) is associated with a second quantitative identifier 310(2). For example, in addition to the score of 72% for stability of the shoulder of the user 102, a score of 89% for mobility of the shoulder of the user 102 may be determined.
In some implementations, one or more score values 124 may be determined based on one or more other score values 124.
At 502, a determination may be made that at least a threshold portion of a body of a user 102 is visible within a field of view of a camera. For example, a user device 106 associated with a camera may determine whether the user 102 is positioned relative to the camera in a manner that would enable evaluation of the movements of the user 102 based on acquired video data 104. In some implementations, such a determination may include determining a distance of the user 102 relative to the camera to be less than a threshold maximum distance and greater than a threshold minimum distance. In other implementations, such a determination may include determining that selected parts of the body of the user 102 are able to be identified using object recognition, pose extraction, or other types of algorithms. For example, a determination may be made that at least a threshold number of body parts of the user 102 that correspond to points included within a determined pose are able to be identified. In some implementations, a position of a user may be determined using a depth sensor, time-of-flight sensor, or other types of sensors in addition to or in place of a camera. For example, a sensor may generate point cloud data, which may be processed to determine locations of one or more parts of the body of the user 102.
At 504, a first output 108 may be presented. The first output 108 may include instructions for performing movements. For example, the first output 108 may include a video, text instructions, audio instructions, one or more images, or another type of data that may instruct a user 102 regarding performance of one or more movements. The user 102 may attempt to perform the movements within a field of view of a camera during or after viewing the first output 108.
At 506, data representing the user 102 performing the movements may be acquired using a camera or other sensors 103. For example, the user device 106 presenting the first output 108 may also include a camera, or a separate camera associated with the user device 106 or with a different computing device may be in an environment with the user 102, and the user 102 may perform the movements within the field of view of the camera. Video data 104 acquired using the camera may be processed and analyzed by the user device 106 or transmitted to another device, such as one or more servers 110. In other implementations, a combination of the user device 106 and one or more other computing devices may process and analyze the video data 104. Sensor data 105, such as data from one or more accelerometers indicating movement of the user 102, data from position sensors, location sensors, or touch sensors indicating a position of one or more body parts of the user 102, and so forth may also be analyzed by the user device 106 or transmitted to another device. In some implementations, a depth sensor, time-of-flight sensor, or other types of sensors may be used to determine a position of one or more body parts of the user 102, such as by outputting point cloud data, which may be processed to determine pose data 114.
At 508, pose data 114 may be determined based on the acquired data. The pose data 114 may represent positions of the body of the user 102 during performance of the movements. For example, one or more object recognition algorithms, shape recognition algorithms, segmentation algorithms, and so forth may identify portions of frames of video data 104 in which the user 102 is visible, and portions of frames of video data 104 that correspond to particular body parts of the user 102. Body parts of the user 102 may be represented as points within a pose. The locations and orientations of one or more points may be constrained by the location of one or more other points based on a set of rules. In some implementations, data regarding a point may also indicate movement of the point, a confidence value associated with the location of the point, and so forth. In some implementations, the pose data 114 may also include segmentation information, shape information, information regarding a three-dimensional position of an individual or other object, and so forth. For each set of points, the pose data 114 may also include time data or frame data, such as a frame identifier 202 indicative of a frame or time associated with one or more poses represented by the pose data 114. For example, the pose data 114 may associate a first frame identifier 202 or first time data with a first set of points, a second frame identifier 202 or second time data with a second set of points, and so forth. In some implementations, the pose data 114 may be analyzed or processed based on segmentation data. For example, segmentation data may include data indicative of poses or other types of data representing particular movements or repetitions of movements. Based on the segmentation data, a portion of the pose data 114 that corresponds to a particular movement or a particular repetition of a movement may be determined. In some implementations, based on the segmentation data, one or more portions of the pose data 114 that do not correspond to any of the movements may be determined and disregarded from analysis, such as to conserve computational resources, reduce data transmission, and so forth.
At 510, error data 118 indicative of errors during performance of the movements may be determined based on correspondence between the pose data 114 and movement data 120. The movement data 120 may associate poses with errors, severity levels of errors, or the absence of errors. For example, movement data 120 may include one or more videos, pose data 114, sets of points representing poses, or other types of inputs provided by content curators or other types of expert or non-expert users. In other cases, movement data 120 may include videos, pose data 114, or sets of points determined from the user 102 or other users 102 performing the movements. The movement data 120 may associate point data 204 indicative of a set of points representing a position of a user 102 with corresponding error data 118 indicative of one or more errors associated with the position, a severity level 304 associated with the error(s), or the absence of errors. In some implementations, the error data 118 associated with performance of the movements by the user 102 may be determined using a machine learning system to classify the pose data 114 of the user 102 using the movement data 120.
At 512, one or more score values 124 representing characteristics of movement of the user 102 may be determined based on correspondence between the error data 118 and score data 126. The score data 126 may associate the error data 118 with score values 124.
At 514, activity information 132 representing one or more activities to improve performance of the movements may be determined. The activity information 132 may be determined based on correspondence between the error data 118 and activity data 130 that associates error data 118 with activity information 132. For example, performance of a particular activity may reduce occurrence or severity of one or more particular errors, improve one or more characteristics of movement, improve one or more score values 124, and so forth. In some implementations, one or more errors may be associated with a priority level. In such a case, activity information 132 associated with an error having a higher priority may be presented in place of or prior to activity information 132 associated with an error having a lower priority. Additionally, in some implementations, when particular activity information 132 has been previously presented to a user 102 within a threshold length of time, other activity information 132 may be presented in place of or prior to the particular activity information 132. In some implementations, activity information 132 may include multiple activities that may be performed over a period of time to improve one or more characteristics of the movement of a user 102.
At 516, a second output 108 may be presented using a display, the second output 108 including at least a portion of the determined score values 124 and activity information 132. In some implementations, the second output 108 may include a user interface 402.
One or more power supplies 604 may be configured to provide electrical power suitable for operating the components of the computing device 602. In some implementations, the power supply 604 may include a rechargeable battery, fuel cell, photovoltaic cell, power conditioning circuitry, and so forth.
The computing device 602 may include one or more hardware processor(s) 606 (processors) configured to execute one or more stored instructions. The processor(s) 606 may include one or more cores. One or more clock(s) 608 may provide information indicative of date, time, ticks, and so forth. For example, the processor(s) 606 may use data from the clock 608 to generate a timestamp, trigger a preprogrammed action, and so forth.
The computing device 602 may include one or more communication interfaces 610, such as input/output (I/O) interfaces 612, network interfaces 614, and so forth. The communication interfaces 610 may enable the computing device 602, or components of the computing device 602, to communicate with other computing devices 602 or components of the other computing devices 602. The I/O interfaces 612 may include interfaces such as Inter-Integrated Circuit (I2C), Serial Peripheral Interface bus (SPI), Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-232, and so forth.
The I/O interface(s) 612 may couple to one or more I/O devices 616. The I/O devices 616 may include any manner of input devices or output devices associated with the computing device 602. For example, I/O devices 616 may include touch sensors, displays, touch sensors integrated with displays (e.g., touchscreen displays), keyboards, mouse devices, microphones, image sensors, cameras, depth sensors, time-of-flight detection systems such as lidar, scanners, speakers or other types of audio output devices, haptic devices, printers, and so forth. In some implementations, the I/O devices 616 may be physically incorporated with the computing device 602. In other implementations, I/O devices 616 may be externally placed. I/O devices 616 may also include sensors 103 for determining the location, orientation, or movement of one or more body parts of a user 102, such as accelerometers, position sensors, depth sensors, time-of-flight sensors, and so forth.
The network interfaces 614 may be configured to provide communications between the computing device 602 and other devices, such as the I/O devices 616, routers, access points, and so forth. The network interfaces 614 may include devices configured to couple to one or more networks including local area networks (LANs), wireless LANs (WLANs), wide area networks (WANs), wireless WANs, and so forth. For example, the network interfaces 614 may include devices compatible with Ethernet, Wi-Fi, Bluetooth, ZigBee, Z-Wave, 3G, 4G, 5G, LTE, and so forth.
The computing device 602 may include one or more buses or other internal communications hardware or software that allows for the transfer of data between the various modules and components of the computing device 602.
The computing device 602 may include one or more memories 618, which may store instructions and data used by the processor(s) 606 and other components of the computing device 602.
The memory 618 may include one or more operating system (OS) modules 620. The OS module 620 may be configured to manage hardware resource devices such as the I/O interfaces 612, the network interfaces 614, the I/O devices 616, and to provide various services to applications or modules executing on the processors 606. The OS module 620 may implement a variant of the FreeBSD operating system as promulgated by the FreeBSD Project; UNIX or a UNIX-like operating system; a variation of the Linux operating system as promulgated by Linus Torvalds; the Windows operating system from Microsoft Corporation of Redmond, Wash., USA; or other operating systems.
One or more data stores 622 and one or more of the following modules may also be associated with the memory 618. The modules may be executed as foreground applications, background tasks, daemons, and so forth. The data store(s) 622 may use a flat file, database, linked list, tree, executable code, script, or other data structure to store information. In some implementations, the data store(s) 622 or a portion of the data store(s) 622 may be distributed across one or more other devices including other computing devices 602, network attached storage devices, and so forth.
A communication module 624 may be configured to establish communications with one or more other computing devices 602. Communications may be authenticated, encrypted, and so forth.
The memory 618 may also store the image analysis module 112. The image analysis module 112 may determine pose data 114 based on stored or acquired video data 104, and in some implementations, sensor data 105 acquired using one or more sensors 103. In some implementations, the image analysis module 112 may include one or more object recognition or segmentation algorithms that may identify portions of frames of video data 104 in which a user 102 is visible. In some implementations, the image analysis module 112 may use one or more object recognition algorithms, or other techniques, to determine portions of frames of video data 104 that correspond to particular body parts of a user 102. Pose data 114 may represent the determined positions of parts of a body as a set of points. The locations and orientations of one or more points may be constrained by the location of one or more other points based on a set of rules. In some implementations, each point of a pose may associate an identifier of the point with a particular location or orientation of the point. In some implementations, data regarding a point may also indicate movement of the point, a confidence value associated with the location of the point, and so forth. In some implementations, the pose data 114 may also include segmentation information, shape information, information regarding a three-dimensional position of an individual or other object, and so forth. The pose data 114 may also include data indicative of a frame of video data 104 or a time associated with one or more poses represented by the pose data 114. While the image analysis module 112 is described with regard to analysis of video data 104 received from a user 102, in some implementations, the image analysis module 112 may also be used to generate movement data 120. For example, video data 104 representing performance of movements by trainers, experts, non-experts, content curators, and so forth may be analyzed to determine pose data 114 that may be used to classify subsequent video data 104 received from users 102.
The memory 618 may additionally store the error determination module 116. The error determination module 116 may determine error data 118 based on correspondence between the pose data 114 and movement data 120. The movement data 120 associates pose data 114 determined from one or more videos, the pose data 114, or sets of points representing positions with corresponding error data 118. The error data 118 may indicate particular errors, severity levels associated with one or more errors, or an absence of errors. The poses indicated in the movement data 120 may be annotated, such as through expert review or an automated system, to indicate error data 118 represented by particular poses. In some implementations, the error determination module 116 may include a neural network or other type of machine learning system, which may classify pose data 114 received from a user 102 performing a set of movements, or another source, based on the movement data 120. For example, if a portion of the pose data 114 associated with a particular movement corresponds to a portion of the movement data 120 associated with the particular movement within a threshold level of confidence, this may indicate that the errors, severity levels 304 of the errors, or absence of errors associated with the corresponding portion of the movement data 120 may also have occurred during performance of the movement by the user 102.
The memory 618 may also store the scoring module 122. The scoring module 122 may determine one or more score values 124 associated with movement of a user 102 based on correspondence between error data 118 associated with the user 102 and score data 126. The score data 126 may associate particular errors, severity levels, or absence of errors with corresponding score values 124. In some implementations, the scoring module 122 may determine multiple score values 124 indicative of characteristics of the movement(s) of the user 102. For example, determined score values 124 may include individual score values 124 associated with particular regions of the body of the user 102 and particular characteristics of movement. In some implementations, score value(s) 124 may include one or more average or aggregate values determined from one or more other score values 124. For example, one or more determined score values 124 may include an overall indication of a mobility of the user 102, an overall indication of a stability of the user 102, an overall indication of a posture of the user 102, an overall indication of quality of movement of the user 102, and so forth. In some cases, one or more individual score values 124 may be weighted differently when determining one or more aggregate or average score values 124. The score data 126 may include rules, weights, algorithms, and so forth that indicate the effect of a particular error, and in some cases an indication of a severity of the error, when determining particular score values 124.
The memory 618 may store the recommendation module 128. The recommendation module 128 may determine activity information 132 that corresponds to error data 118 associated with a user 102. For example, the recommendation module 128 may determine correspondence between the error data 118 and activity data 130 that associates errors, severity levels 304 of errors, or the absence of one or more errors with corresponding activities to determine activity information 132 indicative of one or more activities that correspond to the movement(s) of the user 102. Performance of the corresponding activities may be used to reduce occurrence or severity of errors, improve characteristics of movement, improve score values 124, and so forth.
The memory 618 may also store the output module 134. The output module 134 may determine output data 136 based on the determined score values 124 and activity information 132. In some implementations, the output module 134 may also access one or more rules, algorithms, templates, formats, and so forth that may be used to control the format, layout, style, or other characteristics of a user interface 402 or other type of output 108 that is presented. As one example, the output 108 may include a user interface 402 that presents score values 124 in association with indications of regions of the body of the user 102 and movement characteristics, other score values 124 determined based in part on the individual score values 124, and links to access or controls to initiate playback of activity information 132.
Other modules 626 may also be present in the memory 618. For example, other modules 626 may include permission or authorization modules to enable a user 102 to provide authorization to acquire video data 104 of the user 102. For users 102 that do not opt in or otherwise authorize acquisition of video data 104 that depicts the user 102, generation, transmission, or use of such video data 104 may be prevented. Other modules 626 may also include encryption modules to encrypt and decrypt communications between computing devices 602, authentication modules to authenticate communications sent or received by computing devices 602, a permission module to assign, determine, and manage user permissions to access or modify data associated with computing devices 602, user interface modules to generate interfaces for receiving input from users 102, such as selection of video data 104 for presentation, and so forth. Other modules 626 may also include modules for acquiring data using sensors 103. For example, in addition to acquiring data indicative of a position of a user 102 using one or more cameras, in some implementations, sensors 103 may be worn, held, or positioned in an environment with a user 102. The sensors 103 may generate sensor data 105 indicative of a location of the sensor 103 or of a body part of the user 102 that is associated with the sensor 103, which may be used in addition to data acquired using a camera.
Other data 628 within the data store(s) 622 may include configurations, settings, preferences, and default values associated with computing devices 602. Other data 628 may also include encryption keys and schema, access credentials, and so forth. Other data 628 may additionally include formats, layouts, or templates for presentation of user interfaces 402 or other types of output. Other data 628 may include threshold data, such as threshold confidence values for determining errors based on pose data 114 from a user 102.
In different implementations, different computing devices 602 may have different capabilities or capacities. For example, servers 110 may have greater processing capabilities or data storage capacity than user devices 106.
The processes discussed in this disclosure may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more hardware processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel. Furthermore, the order in which the operations are described is not intended to be construed as a limitation.
Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described in this disclosure. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage media may include, but are not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further, embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet.
Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art.
Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, environments, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.