This disclosure relates generally to ergonomics and, more particularly, to extended reality systems, apparatus, and methods for musculoskeletal ergonomic improvement.
Extended reality devices such as augmented reality headsets can generate environments that combine reality with digital features. For instance, a user wearing an augmented reality headset can be guided to perform an action in the real world via information provided in a digital format, where the digital content appears in the user's environment.
An example apparatus includes an avatar generator to generate an avatar based on one or more properties of a user; an avatar position analyzer to determine a first ergonomic form for a movement based on the one or more properties of the user, the avatar generator to cause an output device to display the avatar in the first ergonomic form; and a feedback generator to determine a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generate a graphical representation of the user in the second form; and cause the output device to display the graphical representation of the user with the avatar.
An example system includes a first sensor and an extended reality coach controller to execute a neural network model to generate an avatar illustrating a first ergonomic position for a movement; cause an extended reality device to output the avatar; determine a second position of a body part of a user based on first sensor data generated by the first sensor; perform a comparison of the first ergonomic position and the second position; and cause the extended reality device to output graphical feedback based on the comparison.
An example non-transitory computer readable medium includes instructions that, when executed by at least one processor, cause the at least one processor to generate an avatar based on one or more properties of a user; determine a first ergonomic form for a movement based on the one or more properties of the user; cause an output device to display the avatar in the first ergonomic form; determine a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generate a graphical representation of the user in the second form; and cause the output device to display the graphical representation of the user with the avatar.
An example method includes generating an avatar based on one or more properties of a user; determining a first ergonomic form for a movement based on the one or more properties of the user; causing an output device to display the avatar in the first ergonomic form; determining a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generating a graphical representation of the user in the second form; and causing the output device to display the graphical representation of the user with the avatar.
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc. are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
A user interacting with an extended reality device, such as an augmented reality headset, can be guided to perform an action in the real world via information provided in a digital format, where the digital content appears in the user's environment. The digital content can include an avatar, or a graphical representation of a person or character.
An individual may experience a musculoskeletal injury (e.g., an injury to muscle(s), nerve(s), and/or joint(s) of the individual's body) while performing tasks. Such injuries can stem from conditions in a work environment and/or a manner in which the activities are performed. For instance, performing repetitive tasks, lifting heavy objects, and/or other types of overuse or overexertion activities can cause musculoskeletal injuries that, in addition to causing pain, may affect worker productivity. Efforts to reduce musculoskeletal injuries often do not begin until the worker is already experiencing pain.
Disclosed herein are example systems, apparatus, and methods that generate an avatar or digital coach to demonstrate reference, correct, or otherwise optimal ergonomic form(s) for performing movement(s) in connection with a movement and/or task to be performed by a user, such as lifting a box, installing a component, etc. Examples disclosed herein generate the avatar based on features of the user, such as height, age, gender, etc. of the user. Examples disclosed herein execute neural network model(s) to determine reference or optimal ergonomic forms for the user in performing the movement(s), such as body part position(s), posture(s), muscle tension level(s), speed(s) at which the movement(s) are to be performed, etc. Examples disclosed herein implement the neural network model(s) based on properties of the user and/or of users having properties similar to the user of interest to develop recommendations customized to the user's body type, musculoskeletal injury history, etc., thereby providing ergonomic form recommendations to the user when performing movement(s) associated with the task.
Examples disclosed herein provide the user with feedback indicative of a performance of the user relative to the ergonomic forms presented by the avatar. Some examples disclosed herein analyze sensor data collected from the user and/or the environment in which the user is located to determine positions of body parts and/or other movement characteristics of the user. Examples disclosed herein compare the positions and/or other movement characteristics of the user to the ergonomic forms determined by the neural network analysis. The sensor data can include, for instance, position data (e.g., accelerometer data), muscle strain data, image data, etc. In some examples, the feedback includes graphical representations of the user presented with the avatar (e.g., a graphical representation of the user overlaying an image of the avatar). Such feedback shows differences between the user's form and the reference or optimal form presented by the avatar. Examples disclosed herein use extended reality to provide customized recommendations and feedback to guide the user in performing movements in accordance with recommended ergonomic forms.
The example system 100 of
The example XR device 104 includes a display 106. As disclosed herein, the display 106 provides means for presenting extended reality content to the user 102. The extended reality content can include, for example, virtual reality content, augmented reality content, and/or mixed reality content depending on, for example, the type of XR device 104 (e.g., a VR headset) and/or the type of content to be presented (e.g., mixed reality content to facilitate training simulations in the real world with virtual guidance). In the example of
The example system 100 includes one or more sensors to collect data from the user 102 and/or the environment 103 with respect to movement(s) performed by the user 102. In particular, the sensor(s) collect data associated with the user 102 during use of the XR device 104 (e.g., during presentation of and/or interaction with the avatar or digital coach). For example, the sensor(s) can include user position sensors 112 to generate data indicative of movement of one or more portions of the body of the user 102. The user position sensor(s) 112 can include motion capture sensor(s), accelerometer(s), etc. to output data indicative of change(s) in position of one or more portion(s) of the body of the user 102 (e.g., arms, legs, wrists, etc.). In some examples, the user position sensor(s) 112 include weight, pressure, and/or load sensor(s) to detect changes in weight transfer between one or more portions of the user's body (e.g., between the user's feet). The user position sensor(s) 112 can be carried by the user 102, by the XR device 104, and/or by other user device(s) 114 (e.g., a smartwatch, a smartphone, etc.) carried by the user 102 and/or located in the environment 103.
The example system 100 includes one or more strain sensor(s) 116 to detect strain and/or stress on joint(s) of the user 102 and/or with respect to the muscle(s) of the user 102. The strain sensor(s) 116 can include electromyography (EMG) sensor(s) worn by the user 102 to detect muscle tension. In some examples, the strain sensor(s) 116 include sensor(s) to detect skin and/or muscle temperature, which are indicative of muscle activity. In some examples, the strain sensor(s) 116 include fabric sensing wearable(s). The fabric sensing wearable(s) include wearable fabrics (e.g., a shirt or other garment) that include sensor(s) to output data indicative of strain on the muscle(s) and/or skeleton (e.g., joint(s)) of the user 102. For example, motion-sensing fabrics can include pressure and/or strain sensor(s) that output signal(s) in response to changes in pressure and/or deformation of the sensor(s) during movement by the user 102. The strain sensor(s) 116 can be carried by the user 102 and/or by user device(s) 104, 114 associated with the user 102.
The example system 100 includes image sensor(s) 118 (e.g., camera(s)) to generate image data of the user 102 in the environment 103. For example, the image sensor(s) 118 may be located in a room in the environment 103 in which the user 102 is performing the task(s) (e.g., in the manufacturing facility of
The example system 100 can include other types of sensors than the example sensors 112, 116, 118 disclosed herein. Also, in some examples, the system 100 includes fewer types of sensor(s).
In the example of
In some examples, as represented in
In other examples, the signals output by one or more of the user position sensor(s) 112, the strain sensor(s) 116, and/or the image sensor(s) 118 are transmitted to, for instance, the processor 108 of the XR device 104 for processing before being transmitted to the XR coach controller 120 (i.e., in examples where the XR coach controller 120 is implemented by processor(s) and/or cloud-based device(s) different than the on-board processors 108 of the XR device 104). For example, the processor 108 of the XR device 104 can perform operations such as removing noise from the signal data, and/or converting the signal data from analog to digital data. In such examples, the on-board processor 108 of the XR device 104 is in communication (e.g., wireless communication) with the XR coach controller 120. Additionally or alternatively, the pre-processing can be performed by the processor(s) 122 of the other user device(s) 114.
In some examples, the XR coach controller 120 receives sensor data from the user position sensor(s) 112, the strain sensor(s) 116, and/or the image sensor(s) 118; from the processor 108 of the XR device 104; and/or from the processor(s) 122 of the other user device(s) 114 in substantially real-time (as used herein, “substantially real time” refers to occurrence in a near instantaneous manner (e.g., +/−1 second) recognizing there may be real world delays for computing time, transmission, etc.). In other examples, the XR coach controller 120 receives the sensor data at a later time (e.g., periodically and/or aperiodically based on one or more settings but sometime after the activity that caused the sensor data to be generated, such as movement by the user 102, has occurred (e.g., seconds later)). If the data has not already been processed, the XR coach controller 120 can perform one or more operations on the data from the sensor(s) 112, 116, 118 such as filtering the raw signal data, removing noise from the signal data, and/or converting the signal data from analog to digital data.
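By way of illustration only, and not as a required implementation of the examples disclosed herein, the following sketch shows one way such pre-processing could be performed in software, here as a low-pass Butterworth filter applied to a single raw sensor channel; the sampling rate, cutoff frequency, and variable names are assumptions rather than features of the disclosed examples.

```python
# Illustrative sketch only: low-pass filtering a raw position-sensor channel
# to remove noise before further analysis. The sampling rate, cutoff
# frequency, and variable names are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_channel(raw_samples, fs_hz=100.0, cutoff_hz=5.0, order=4):
    """Return a noise-reduced copy of a single sensor channel."""
    nyquist = fs_hz / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    # filtfilt applies the filter forward and backward to avoid phase lag.
    return filtfilt(b, a, raw_samples)

# Example usage with synthetic samples standing in for user position data 200.
t = np.linspace(0.0, 2.0, 200)
raw = np.sin(2 * np.pi * t) + 0.2 * np.random.randn(t.size)
clean = preprocess_channel(raw)
```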
In the example of
In the example of
In examples disclosed herein, the XR coach controller 120 implements neural network model(s) to cause the avatar to illustrate ergonomic forms including posture(s), position(s), orientation(s), range(s) of motion, muscle tension level(s), speed(s), etc. for performing movement(s) to promote and/or preserve musculoskeletal integrity. In examples disclosed herein, the ergonomic forms are based on properties of the user 102 and/or other individuals sharing properties with the user 102. For instance, the neural network model(s) can be generated using information for the user 102 and/or other users based on properties such as age, gender, physical body shape, weight, previous medical history (e.g., injuries, conditions such as arthritis). The user properties can be provided as user inputs at one or more of the XR device 104 or the other user device(s) 114. In some examples, the neural network(s) are trained based on image data, position sensor data, and/or strain sensor data generated for the user 102 and/or other users. The neural network model(s) can be generated for particular tasks such as lifting a box, installing a component located overhead, and/or other tasks defined based on the environment 103, the role of the user 102, etc. As a result of the neural network analysis, the XR coach controller 120 controls the avatar to demonstrate reference, correct, or optimal ergonomic form(s) for performing a task that are customized for the user 102.
In the example of
The example XR coach controller 120 of
In some examples, the feedback generated by the XR coach controller 120 includes graphical or visual representation(s) of the user. The graphical representation(s) can illustrate the position(s) of the body part(s) of the user 102 (e.g., an arm of the user 102) relative to corresponding portion(s) of the avatar (e.g., an arm of the avatar) presented via the display 106 of the XR device 104. For instance, a graphical representation of the body of the user can be presented as overlaying an image of the avatar to enable a comparison of the alignment of the user 102 with the avatar in a particular position.
In some examples, the XR coach controller 120 causes a graphical feature of the avatar to be adjusted based on the comparison of the ergonomic form of the user to the reference ergonomic form. For example, the XR coach controller 120 can cause a color of the avatar to change to a first color (e.g., green) to provide visual feedback to the user 102 when the XR coach controller 120 determines that the position(s) of the body part(s) of the user 102 substantially align with corresponding portion(s) of the avatar. The XR coach controller 120 can cause a color of the avatar to change to a second color (e.g., red) to alert the user 102 when the XR coach controller 120 determines that the position(s) of the body part(s) of the user 102 do not substantially align with corresponding portion(s) of the avatar.
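By way of illustration only, the color-based feedback described above could be selected along the lines of the following sketch; the color values, body part names, and function name are assumptions and not a required implementation of the disclosed examples.

```python
# Illustrative sketch only: selecting per-body-part avatar colors based on
# whether each tracked body part substantially aligns with the avatar.
# The color values and body part names are assumptions.
ALIGNED_COLOR = (0, 255, 0)      # green: substantially aligned
MISALIGNED_COLOR = (255, 0, 0)   # red: not substantially aligned

def avatar_part_colors(alignment_by_part):
    """Map each body part (e.g., 'left_arm') to a feedback color."""
    return {part: (ALIGNED_COLOR if aligned else MISALIGNED_COLOR)
            for part, aligned in alignment_by_part.items()}

# Example: the user's knees align with the avatar but the left arm does not.
colors = avatar_part_colors({"knees": True, "left_arm": False})
```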
The feedback generated by the XR coach controller 120 can additionally or alternatively include other types of feedback involving the avatar. For example, the XR coach controller 120 can cause other types of graphical feedback to be presented via the XR device 104. For instance, the XR coach controller 120 can cause the avatar to perform actions such as clapping in response to a determination based on data from the sensor(s) 112, 116, 118 that the user 102 is performing a movement with correct posture. The XR coach controller 120 can cause other types of content (e.g., video content) to be presented via the XR device 104 to provide feedback to the user 102, such as a check mark that is to be displayed when the XR coach controller 120 determines that the user 102 performed the movement with correct posture.
In some examples, the feedback generated by the XR coach controller 120 includes audio feedback to be output via, for example, the speaker(s) 126 of the extended reality device 104 and/or speaker(s) 128 of the other user device(s) 114. The audio feedback can include instructions with respect to performing the movement(s) (e.g., “bend your knees before lifting the box”) and/or feedback regarding whether the user 102 performed the movement with proper form. Additionally or alternatively, the feedback generated by the XR coach controller 120 can include textual instructions with respect to adjustments to the user's form that are displayed via, for instance, the display 106 of the XR device 104.
In some examples, feedback from the XR coach controller 120 is provided via haptic feedback actuator(s) 124. The haptic feedback actuator(s) 124 can be carried by, for example, the user 102, the XR device 104, the other user device(s) 114 (e.g., a smartphone carried by the user 102), etc. In some examples, the XR coach controller 120 instructs the haptic feedback actuator(s) 124 to provide haptic feedback output(s) (e.g., vibrations) while the avatar is being presented to make the user 102 aware of his or her posture, speed of movement, tension exerted, etc. relative to the avatar. In other examples, the XR coach controller 120 causes the haptic feedback to be output independent of the presentation of the avatar and in response to, for example, detection of movement by the user 102.
In some examples, the other user device(s) 114 include user device(s) (e.g., electronic tablets, smartphones, laptops) associated with the user 102 and/or a third party who is authorized to receive report(s), alert(s), etc. with respect to the analysis of the sensor data, performance of movement(s) of the user 102 relative to the recommendations presented by the avatar, etc. The third party can include, for example, a medical professional. In such examples, the XR coach controller 120 can transmit the data collected by the sensor(s) 112, 116, 118 and/or results of analyses thereof for display at the output device(s) 114. Thus, the authorized third party can track changes in the user 102 and/or other users with respect to ergonomic performance over time.
In the example of
In the example of
The example XR coach controller 120 includes a signal modifier 208. The signal modifier 208 can perform operations to modify the sensor data 200, 202, 204 from the sensor(s) 112, 116, 118 to, for example, filter the data, convert time domain data into the frequency domain (e.g., via Fast Fourier Transform (FFT) processing) for spectral analysis, etc. In some examples, the signal modifier 208 processes the sensor data 200, 202, 204 if pre-processing of the data has not otherwise been performed at the XR device 104 and/or the other user device(s) 114. In some examples, the data 200, 202, 204 undergoes modification(s) by the signal modifier 208 before being stored in the database 206.
The example XR coach controller 120 includes an image data analyzer 207. The image data analyzer 207 analyzes the image data 204 generated by the image sensor(s) 118 using image recognition analysis to, for example, identify the user 102 in the image data, detect locations of body part(s) of the user 102, etc. In some examples, the image data analyzer 207 identifies the body part(s) of the user 102 in the image data 204 using keypoint detection, where the keypoints represent joints of the user 102. The results of the image recognition analysis performed by the image data analyzer 207 (e.g., keypoint locations) are stored in the database 206 as image recognition data 209.
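By way of illustration only, keypoint detection of the type referred to above could be performed with an off-the-shelf pose-estimation library; the following sketch uses MediaPipe Pose, and the library choice, image path, and selected joints are assumptions rather than requirements of the disclosed examples.

```python
# Illustrative sketch only: extracting joint keypoints of the user from one
# camera frame using the MediaPipe Pose library. The library choice, image
# path, and selected joints are assumptions.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def detect_keypoints(image_path):
    """Return normalized (x, y) image coordinates for selected joints."""
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(rgb)
    if results.pose_landmarks is None:
        return {}
    landmarks = results.pose_landmarks.landmark
    joints = {
        "left_shoulder": mp_pose.PoseLandmark.LEFT_SHOULDER,
        "left_elbow": mp_pose.PoseLandmark.LEFT_ELBOW,
        "left_wrist": mp_pose.PoseLandmark.LEFT_WRIST,
    }
    return {name: (landmarks[idx.value].x, landmarks[idx.value].y)
            for name, idx in joints.items()}

# Example usage on a hypothetical frame from the image sensor(s) 118.
keypoints = detect_keypoints("frame_0001.png")
```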
In the example of
The user profile data 210 can define task(s) to be performed by the user 102 for which the avatar is to provide instructions with respect to reference or optimal ergonomic movements. For example, the task(s) can be associated with a job of the user 102 such as installing a component while on a ladder, loading a truck with inventory, etc.
In some examples, the database 206 stores population profile data 212. The population profile data 212 can include data associated with individuals in a population sharing one or more characteristics with the user 102, such as an average height of an individual based on gender and age. The population profile data 212 can include average muscle forces exerted by users at different ages, with different health conditions, etc. The population profile data 212 can include average distances between joints in users of a certain height. The population profile data 212 can be defined by user input(s) and can include other types of data than the examples disclosed herein.
In the example of
The example XR coach controller 120 of
As disclosed herein, the avatar generated by the avatar generator 216 is controlled to perform movement(s) with reference or optimal ergonomic form(s) to guide the user 102 in performing movements (e.g., in connection with a task) in ergonomically correct form. The example XR coach controller 120 of
Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the model may be trained with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations.
In general, implementing a ML/AI system involves two phases, a learning/training phase and an inference phase. In the learning/training phase, a training algorithm is used to train a model to operate in accordance with patterns and/or associations based on, for example, training data. In general, the model includes internal parameters that guide how input data is transformed into output data, such as through a series of nodes and connections within the model to transform input data into output data. Additionally, hyperparameters are used as part of the training process to control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). Hyperparameters are defined to be training parameters that are determined prior to initiating the training process.
Different types of training may be performed based on the type of ML/AI model and/or the expected output. For example, supervised training uses inputs and corresponding expected (e.g., labeled) outputs to select parameters (e.g., by iterating over combinations of select parameters) for the ML/AI model that reduce model error. As used herein, labelling refers to an expected output of the machine learning model (e.g., a classification, an expected output value, etc.). Alternatively, unsupervised training (e.g., used in deep learning, a subset of machine learning, etc.) involves inferring patterns from inputs to select parameters for the ML/AI model (e.g., without the benefit of expected (e.g., labeled) outputs).
Training is performed using training data. In examples disclosed herein, the training data originates from previously generated sensor data (e.g., user position data, strain sensor data such as EMG data or fabric stretch sensor data, image data of user(s) performing different movement(s), user parameter data (e.g., weight, gender), motion capture sensor data, etc.) for movement associated with reference or correct ergonomic forms. Because supervised training is used, the training data is labeled.
Once training is complete, the model is deployed for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the model. The model(s) are stored at one or more databases (e.g., the database 236 of
Once trained, the deployed model may be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the model, and the model executes to create an output. This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the model to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the machine learning model. Moreover, in some examples, the output data may undergo post-processing after it is generated by the AI model to transform the output into a useful result (e.g., a display of data, an instruction to be executed by a machine, etc.).
In some examples, output of the deployed model may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed model can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.
Referring to
The example first computing system 222 of
The example first computing system 222 of
In the example of
In the example of
The neural network trainer 226 trains the neural network implemented by the neural network processor 224 using the training data 230 to identify reference or optimal ergonomic forms for performing movement(s) that are based on or directed to properties of the user 102. One or more avatar position models 234 are generated as a result of the neural network training. The avatar position model(s) 234 define position(s), posture(s), range(s) of motion, speed(s), muscle tension level(s), etc. to be demonstrated by the avatar or digital coach with respect to movement(s) associated with a task (e.g., lifting a box) to promote and/or protect musculoskeletal integrity. The avatar position model(s) 234 are stored in a database 236. The databases 232, 236 may be the same storage device or different storage devices.
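By way of illustration only, the training described above could resemble the following sketch, in which a small regression network maps user properties to reference joint angles for a single movement; the network architecture, feature set, and synthetic values are assumptions standing in for the training data 230, not a required implementation of the disclosed examples.

```python
# Illustrative sketch only: supervised training of a small regression model
# that maps user properties to reference joint angles for one movement.
# The architecture, features, and synthetic values are assumptions.
import torch
from torch import nn

# Features: [height_cm, weight_kg, age_yr]; targets: [knee, hip, elbow] angles in degrees.
features = torch.tensor([[170.0, 70.0, 30.0],
                         [185.0, 90.0, 45.0],
                         [160.0, 55.0, 25.0]])
targets = torch.tensor([[95.0, 80.0, 40.0],
                        [100.0, 85.0, 42.0],
                        [90.0, 78.0, 38.0]])

# Normalize features so the optimizer behaves well on differently scaled inputs.
features = (features - features.mean(dim=0)) / features.std(dim=0)

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(500):                 # training loop over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()

# Persist the trained weights, analogous to storing avatar position model(s) 234.
torch.save(model.state_dict(), "avatar_position_model.pt")
```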
The avatar position analyzer 220 executes the avatar position model(s) 234 to generate instructions for controlling an avatar with respect to movement(s) that represent reference or optimal ergonomic form(s) for the user 102 when performing the movement(s). The avatar position model(s) 234 executed by the avatar position analyzer 220 can be selected based on a task to be performed by the user 102 (e.g., as specified in the user profile data 210). The instructions for controlling movement(s) and/or form(s) to be illustrated by the avatar are stored as avatar control instruction(s) 236 in the database 206.
The avatar generator 216 implements the avatar control instruction(s) 236 to cause the avatar to perform the movements with the ergonomic form(s) specified in the instruction(s) 236. For example, the avatar generator 216 can cause the avatar to bend its knees to demonstrate optimal posture for lifting a box based on the instruction(s) 236. The avatar generator 216 communicates with one or more of the processor 108 (e.g., in examples where the XR coach controller 120 is implemented by a different processor) and/or the display controller 110 of the XR device 104 to cause the avatar defined by the avatar property data 218 and illustrating the ergonomic form(s) defined by the instruction(s) 236 to be output by the XR device 104.
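By way of illustration only, the avatar control instruction(s) 236 could be represented as a time-ordered set of keyframes such as in the following sketch; the field names and values are assumptions and not a required format of the disclosed examples.

```python
# Illustrative sketch only: one possible structure for avatar control
# instructions describing a box-lifting movement. Field names and values
# are assumptions, not a required format.
avatar_control_instructions = {
    "task": "lift_box",
    "keyframes": [
        {"t_s": 0.0, "joint_angles_deg": {"knee": 170, "hip": 170, "elbow": 170}},
        {"t_s": 1.0, "joint_angles_deg": {"knee": 95,  "hip": 85,  "elbow": 160}},
        {"t_s": 2.5, "joint_angles_deg": {"knee": 170, "hip": 170, "elbow": 90}},
    ],
    "max_speed_deg_per_s": 120,        # recommended movement speed limit
    "max_muscle_tension_level": 0.6,   # normalized tension level (0 to 1)
}
```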
In the example of
In some examples, the feedback generator 238 analyzes the user position data 200, the strain sensor data 202, the image data 204, and/or the image recognition data 209 to determine position(s) of one or more portion(s) of the body of the user. For example, the feedback generator 238 can determine an angle at which an arm of the user 102 is disposed when the arm is raised above the user's head based on locations of joints (e.g., keypoints representative of the shoulder joint, the elbow joint, the wrist joint) detected in the image data 204 and stored as the image recognition data 209. In some examples, the feedback generator 238 analyzes image data collected by the image sensor(s) 118 from multiple views to determine position(s) of the body parts of the user 102. As another example, the feedback generator 238 can analyze the user position data 200 to determine locations of the body part(s) of the user 102 relative to a reference location via, for instance, motion capture analysis.
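By way of illustration only, a joint angle such as the arm angle described above could be computed from detected keypoints as in the following sketch; the coordinate values are assumptions.

```python
# Illustrative sketch only: computing the angle at a joint (e.g., the elbow)
# from three 2D keypoints (shoulder, elbow, wrist). Coordinate values are
# assumptions in normalized image coordinates.
import numpy as np

def joint_angle_deg(a, b, c):
    """Return the angle at point b formed by segments b->a and b->c, in degrees."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Example: shoulder, elbow, and wrist keypoints of an arm raised overhead.
elbow_angle = joint_angle_deg((0.40, 0.30), (0.55, 0.25), (0.60, 0.10))
```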
The feedback generator 238 compares the form(s) of the body part(s) of the user 102 to the reference or optimal ergonomic form(s) of the corresponding portion(s) of the avatar as determined by the neural network analysis and defined by the avatar control instructions 236. For example, the feedback generator 238 can map the position(s) of the body part(s) of the user 102 to the reference (e.g., optimal) position(s) of the corresponding body part(s) represented by the avatar. The feedback generator 238 determines if the position(s) of the body part(s) of the user 102 substantially align with the reference position(s) demonstrated by the avatar that correspond to proper ergonomic form. Additionally or alternatively, the feedback generator 238 can compare the position(s) of the body part(s) of the user 102 to the reference position(s) or movement(s) defined in the task reference data 214.
The feedback generator 238 determines if the form(s) of the body part(s) of the user 102 align or substantially align with the corresponding ergonomic form(s) of the avatar based on alignment threshold rule(s) 240. The alignment threshold rule(s) can define, for example, an allowable threshold (e.g., percentage) of a difference between a position of a body part of the user 102 and the reference position such that the form of the user satisfies the reference ergonomic form. The alignment threshold rule(s) 240 can be defined by user input(s).
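By way of illustration only, an alignment threshold rule 240 expressed as an allowable percentage difference could be applied as in the following sketch; the default threshold and example angles are assumptions.

```python
# Illustrative sketch only: checking whether a user joint angle is within an
# allowable percentage of the reference angle demonstrated by the avatar.
# The default threshold and example angles are assumptions.
def satisfies_alignment(user_angle_deg, reference_angle_deg, threshold_pct=10.0):
    """Return True if the user's angle is within threshold_pct of the reference."""
    if reference_angle_deg == 0:
        return abs(user_angle_deg) <= threshold_pct
    pct_diff = abs(user_angle_deg - reference_angle_deg) / abs(reference_angle_deg) * 100.0
    return pct_diff <= threshold_pct

# Example: a 102-degree user knee angle against a 95-degree reference (~7% difference).
aligned = satisfies_alignment(102.0, 95.0)   # True
```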
As another example, the feedback generator 238 can analyze speed(s) at which the user 102 is performing a movement, muscle tension exerted, etc. based on the sensor data 200, 202, 204 to determine differences between the efforts exerted by the user 102 in connection with the movement(s) and the recommended speeds, muscle tension, etc. demonstrated by the avatar. The alignment threshold rule(s) 240 can define corresponding thresholds for comparison.
In the example of
In some examples, the feedback generator 238 generates graphical representation(s) 242 of the user 102 in particular position(s) based on the analysis of the sensor data 200, 202, 204. The feedback generator 238 instructs the display controller 110 of the XR device 104 to present the graphical representation 242 of the user 102 with the avatar. For example, the graphical representation 242 of the user can be illustrated as overlaying the avatar in a corresponding position. The overlaying of the graphical representation 242 and the avatar can provide graphical indications of differences between the form of the user 102 and the reference or optimal ergonomic form demonstrated by the avatar.
In some examples, the feedback generator 238 instructs the display controller 110 of the XR device 104 to output the graphical representations 242 showing positions of the user 102 relative to the avatar over time. Such information can inform the user 102 as to whether his or her form is improving over time with respect to ergonomic performance of the movement(s). In some examples, the feedback generator 238 can instruct the display controller to output image data showing positions of other users relative to the avatar over time based on the population profile data 212 to show the user 102 how the user 102 compares to other users.
Additionally or alternatively, the feedback generator 238 can instruct the XR device 104 and/or the other user device(s) 114 to provide other types of feedback with respect to ergonomic performance of the user 102. For example, the feedback generator 238 can generate audio output(s) informing the user 102 of whether or not he or she is performing the movement correctly (e.g., based on the comparison of the sensor data 200, 202, 204 and/or data 209 derived therefrom to the alignment threshold rule(s) 240) and/or textual instructions as to how to perform a movement for display.
Additionally or alternatively, the feedback generator 238 can generate instructions to provide haptic feedback to the user 102 via the haptic feedback actuator(s) 124. In some examples, the feedback generator 238 generates instructions for haptic feedback to be provided based on the analysis of the movement(s) of the user 102 relative to the avatar to alert the user 102 with respect to, for instance, improper form. In other examples, the feedback generator 238 generates instructions for haptic feedback independent of the presentation of the avatar to serve as a reminder to the user 102 to be alert as to ergonomic form when performing movement(s). For instance, the feedback generator 238 can instruct the haptic feedback actuator(s) 124 to generate haptic feedback in response to detection of a change in a position of a body part of the user based on analysis of the sensor data 200, 202, 204.
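By way of illustration only, the haptic reminder described above could be triggered as in the following sketch; the actuator interface and the change threshold are hypothetical placeholders and not part of the disclosed examples.

```python
# Illustrative sketch only: issuing a haptic pulse when a tracked joint angle
# changes appreciably, reminding the user to mind ergonomic form. The
# actuator interface and threshold are hypothetical placeholders.
def maybe_trigger_haptics(previous_angle_deg, current_angle_deg, actuator,
                          change_threshold_deg=15.0):
    """Fire a short vibration when the joint angle change exceeds the threshold."""
    if abs(current_angle_deg - previous_angle_deg) >= change_threshold_deg:
        actuator.vibrate(duration_ms=200)   # hypothetical haptic actuator call
```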
In some examples, the feedback generator 238 transmits data regarding the performance of the user 102 relative to the avatar to user device(s) 114 associated with authorized third parties to enable the third parties to analyze the performance of the user 102 over time. The third party can include, for example, a medical professional. In some examples, the feedback generator 238 transmits the sensor data 200, 202, 204, the image recognition data 209, and/or the graphical representation(s) 242 of the user 102 overlaying the avatar for display at the output device(s) 114. Thus, the authorized third party can track changes in performance of movement(s) by the user 102 over time.
In the example of
While an example manner of implementing the XR coach controller 120 of
While an example manner of implementing the first computing system 222 is illustrated in
As shown in
In the example of
A flowchart representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example first computing system 222 is shown in
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example processes of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
The example instructions 500 begin with the training controller 228 accessing sensor data and/or profile data associated with user(s) and/or population(s) stored in the database 232 (block 502). The sensor data can include, for example, previously generated user position data 200, strain sensor data 202, image data 204, user profile data 210, population profile data 212, and/or task reference data 214. In some examples, the data includes the graphical representation(s) 242 of the user generated by the feedback generator 238 as part of feedback training. In some examples, the sensor data is associated with a particular portion of the body of interest with respect to strain events, such as a shoulder, a knee, a wrist, a neck, a back, etc.
The example training controller 228 labels the data with respect to ergonomic form(s) (block 504). For example, when the sensor data includes image data of a user performing a movement, the training controller 228 labels the image(s) corresponding to the user in a position that corresponds to reference (e.g., optimal, proper) ergonomic form with respect to one or more body parts of the user. As another example, the training controller 228 labels muscle strain data with thresholds for detecting muscle tension level(s) exerted by user(s) in certain position(s). In some examples, the data is labeled for a particular user (e.g., the user 102 of
The example training controller 228 generates the training data 230 based on the labeled sensor data (block 506).
The example training controller 228 instructs the neural network trainer 226 to perform training of the neural network 224 using the training data 230 (block 508). In the example of
The example instructions 600 begin with the XR coach controller 120 accessing sensor data and user profile data 210 associated with a user (e.g., the user 102 of
The avatar generator 216 of
The feedback generator 238 of
The feedback generator 238 compares the form(s) 405 of the user 102 to the ergonomic form(s) 301, 403 demonstrated by the avatar (block 612). The feedback generator 238 determines if the user form(s) 405 are substantially aligned with the ergonomic form(s) 301, 403 demonstrated by the avatar within threshold amount(s) defined by the alignment threshold rule(s) 240 (block 614).
The feedback generator 238 generates feedback to be output to the user via the XR device 104 and/or the other user device(s) 114 in response to the analysis of the form(s) of the user relative to the ergonomic form(s) presented by the avatar (blocks 616, 618). The feedback can include graphical representations 242 of the user that overlay the image(s) of the avatar. The feedback provides indications of whether the user is executing proper or improper ergonomic form(s). For example, a color of the avatar can change based on whether the feedback generator 238 determines that the form(s) of the user are substantially aligned with the ergonomic form(s) of the avatar. In addition to or as an alternative to visual feedback, the feedback generated by the feedback generator 238 can include audio output(s) and/or haptic feedback.
The XR coach controller 120 continues to analyze the user's ergonomic form(s) 405 relative to the ergonomic form(s) 301, 403 demonstrated by the avatar as additional sensor data 200, 202, 204 is received by the XR coach controller 120 (block 620). The example instructions 600 of
The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example neural network processor 224, the example neural network trainer 226, and the example training controller 228.
The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller.
The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
Coded instructions 732 of
The processor platform 800 of the illustrated example includes a processor 812. The processor 812 of the illustrated example is hardware. For example, the processor 812 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example signal modifier 208, the example image data analyzer 207, the example avatar generator 216, the example avatar position analyzer 220, and the example feedback generator 238.
The processor 812 of the illustrated example includes a local memory 813 (e.g., a cache). The processor 812 of the illustrated example is in communication with a main memory including a volatile memory 814 and a non-volatile memory 816 via a bus 818. The volatile memory 814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 is controlled by a memory controller.
The processor platform 800 of the illustrated example also includes an interface circuit 820. The interface circuit 820 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 822 are connected to the interface circuit 820. The input device(s) 822 permit(s) a user to enter data and/or commands into the processor 812. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 824 are also connected to the interface circuit 820 of the illustrated example. The output devices 824 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 820 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 820 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 826. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 for storing software and/or data. Examples of such mass storage devices 828 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
Coded instructions 832 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed that generate an avatar or digital coach that demonstrates reference or optimal ergonomic form for performing movement(s) associated with a task. Examples disclosed herein use extended reality (e.g., augmented reality, mixed reality) to inform a user as to how to perform the task to promote and/or preserve musculoskeletal integrity of body part(s) (e.g., shoulder, back, legs) of the user. Examples disclosed herein generate an avatar based on properties of the user such as gender, height, etc. Examples disclosed herein perform neural network analysis to determine reference or optimal ergonomic form(s) (e.g., position(s), posture(s), etc.) with respect to movement(s) to be performed by the user in connection with a task and based on the properties of the user and/or similar users. In examples disclosed herein, the avatar is presented via an extended reality device (e.g., augmented reality glasses) and demonstrates how to perform the movement(s) with the ergonomic form(s) determined via the neural network analysis. Examples disclosed herein provide the user with feedback, such as a graphical representation of the user shown with (e.g., overlaying) the avatar, to inform the user as to the quality of his or her form relative to the optimal ergonomic forms illustrated by the avatar.
Example extended reality systems, apparatus, and methods for musculoskeletal ergonomic improvement are disclosed herein. Further examples and combinations thereof include the following:
Clause 1 includes an apparatus including an avatar generator to generate an avatar based on one or more properties of a user; an avatar position analyzer to determine a first ergonomic form for a movement based on the one or more properties of the user, the avatar generator to cause an output device to display the avatar in the first ergonomic form; and a feedback generator to determine a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generate a graphical representation of the user in the second form; and cause the output device to display the graphical representation of the user with the avatar.
Clause 2 includes the apparatus of clause 1, wherein the sensor data includes position data for one or more body parts of the user.
Clause 3 includes the apparatus of clauses 1 or 2, wherein the sensor data includes image data including the user.
Clause 4 includes the apparatus of any of clauses 1-3, wherein the avatar position analyzer is to execute a neural network model to determine the first ergonomic form.
Clause 5 includes the apparatus of any of clauses 1-4, wherein the first ergonomic form includes a position of a body part of the user.
Clause 6 includes the apparatus of any of clauses 1-5, wherein the first ergonomic form includes a muscle tension level.
Clause 7 includes the apparatus of any of clauses 1-6, wherein the sensor data includes strain sensor data.
Clause 8 includes the apparatus of any of clauses 1-7, wherein the feedback generator is to cause the output device or a second output device to output haptic feedback in response to the determination of the second form.
Clause 9 includes the apparatus of any of clauses 1-8, wherein the feedback generator is to perform a comparison of the first ergonomic form to the second form and cause a graphical feature of the avatar to be adjusted based on the comparison.
Clause 10 includes the apparatus of any of clauses 1-9, wherein the graphical feature includes a color of the avatar.
Clause 11 includes the apparatus of any of clauses 1-10, wherein the feedback generator is to cause the output device to display the graphical representation of the user as overlaying the avatar.
Clause 12 includes a system including a first sensor; and an extended reality coach controller to execute a neural network model to generate an avatar illustrating a first ergonomic position for a movement; cause an extended reality device to output the avatar; determine a second position of a body part of a user based on first sensor data generated by the first sensor; perform a comparison of the first ergonomic position and the second position; and cause the extended reality device to output graphical feedback based on the comparison.
Clause 13 includes the system of clause 12, wherein the first sensor includes an image sensor and the first sensor data includes image data including the user.
Clause 14 includes the system of clauses 12 or 13, wherein the extended reality coach controller is to generate the avatar based on a property of the user.
Clause 15 includes the system of any of clauses 12-14, wherein the graphical feedback includes a graphical representation of the user in the second position.
Clause 16 includes the system of any of clauses 12-15, wherein the extended reality coach controller is to cause the extended reality device to output the graphical representation of the user as overlaying an image of the avatar.
Clause 17 includes the system of any of clauses 12-16, wherein the extended reality coach controller is to cause a haptic feedback actuator to generate a haptic feedback output based on the comparison.
Clause 18 includes a non-transitory computer readable medium comprising instructions that, when executed by at least one processor, cause the at least one processor to generate an avatar based on one or more properties of a user; determine a first ergonomic form for a movement based on the one or more properties of the user; cause an output device to display the avatar in the first ergonomic form; determine a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generate a graphical representation of the user in the second form; and cause the output device to display the graphical representation of the user with the avatar.
Clause 19 includes the non-transitory computer readable medium of clause 18, wherein the sensor data includes position data for one or more body parts of the user.
Clause 20 includes the non-transitory computer readable medium of clauses 18 or 19, wherein the sensor data includes image data including the user.
Clause 21 includes the non-transitory computer readable medium of any of clauses 18-20, wherein the instructions, when executed, cause the at least one processor to execute a neural network model to determine the first ergonomic form.
Clause 22 includes the non-transitory computer readable medium of any of clauses 18-21, wherein the first ergonomic form includes a position of a body part of the user.
Clause 23 includes the non-transitory computer readable medium of any of clauses 18-22, wherein the instructions, when executed, cause the at least one processor to cause the output device or a second output device to output haptic feedback in response to the determination of the second form.
Clause 24 includes the non-transitory computer readable medium of any of clauses 18-23, wherein the instructions, when executed, cause the at least one processor to perform a comparison of the first ergonomic form to the second form and cause a graphical feature of the avatar to be adjusted based on the comparison.
Clause 25 includes the non-transitory computer readable medium of any of clauses 18-24, wherein the graphical feature includes a color of the avatar.
Clause 26 includes the non-transitory computer readable medium of any of clauses 18-25, wherein the instructions, when executed, cause the at least one processor to cause the output device to display the graphical representation of the user as overlaying the avatar.
Clause 27 includes a method including generating an avatar based on one or more properties of a user; determining a first ergonomic form for a movement based on the one or more properties of the user; causing an output device to display the avatar in the first ergonomic form; determining a second form associated with movement of the user based on sensor data collected via one or more sensors associated with the user; generating a graphical representation of the user in the second form; and causing the output device to display the graphical representation of the user with the avatar.
Clause 28 includes the method of clause 27, wherein the sensor data includes position data for one or more body parts of the user.
Clause 29 includes the method of clauses 27 or 28, wherein the sensor data includes image data including the user.
Clause 30 includes the method of any of clauses 27-29, wherein determining the first ergonomic form includes executing a neural network model to determine the first ergonomic form.
Clause 31 includes the method of any of clauses 27-30, wherein the first ergonomic form includes a position of a body part of the user.
Clause 32 includes the method of any of clauses 27-31, further including causing the output device or a second output device to output haptic feedback in response to the determination of the second form.
Clause 33 includes the method of any of clauses 27-32, further including performing a comparison of the first ergonomic form to the second form and causing a graphical feature of the avatar to be adjusted based on the comparison.
Clause 34 includes the method of any of clauses 27-33, wherein the graphical feature includes a color of the avatar.
Clause 35 includes the method of any of clauses 27-34, further including causing the output device to display the graphical representation of the user as overlaying the avatar.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.