The specification generally relates to tracking physical activity of a user performing exercise movements and providing feedback and recommendations relating to performing the exercise movements. In particular, the specification relates to a system and method for actively tracking physical performance of exercise movements by a user, analyzing the physical performance of the exercise movements using machine learning algorithms, and providing feedback and recommendations to the user.
Physical exercise is considered by many to be a beneficial activity. Existing digital fitness solutions in the form of mobile applications help users by guiding them through a workout routine and logging their efforts. Such mobile applications may also be paired with wearable devices that log heart rate, energy expenditure, and movement patterns. However, they are limited to tracking a narrow subset of physical exercises such as cycling, running, rowing, etc. Also, existing digital fitness solutions cannot match the engaging environment and effective direction provided by personal trainers at gyms, and personal trainers are not easily accessible, convenient, or affordable for many potential users. It is important for a digital fitness solution to address the requirements relating to personalized training, to track the physical performance of exercise movements, and to intelligently provide feedback and recommendations to users that benefit and advance their fitness goals.
This background description provided herein is for the purpose of generally presenting the context of the disclosure.
The techniques introduced herein overcome the deficiencies and limitations of the prior art at least in part by providing systems and methods for tracking physical activity of a user performing exercise movements and providing feedback and recommendations relating to performing the exercise movements.
According to one innovative aspect of the subject matter described in this disclosure, a method for providing feedback in real-time in association with a user performing an exercise movement is provided. The method includes: receiving a stream of sensor data in association with a user performing an exercise movement over a period of time; processing the stream of sensor data; detecting, using a first classifier on the processed stream of sensor data, one or more poses of the user performing the exercise movement; determining, using a second classifier on the one or more detected poses, a classification of the exercise movement and one or more repetitions of the exercise movement; determining, using a third classifier on the one or more detected poses and the one or more repetitions of the exercise movement, feedback including a score for the one or more repetitions, the score indicating an adherence to predefined conditions for correctly performing the exercise movement; and presenting the feedback in real-time in association with the user performing the exercise movement.
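By way of illustration only, the following minimal Python sketch shows one way the three-stage pipeline described above could be composed. The class name, method names, and output format are hypothetical placeholders and not part of the described method; any trained first, second, and third classifiers could be substituted.

# Illustrative composition of the first, second, and third classifiers described above.
class ExerciseFeedbackPipeline:
    def __init__(self, pose_classifier, movement_classifier, form_classifier):
        self.pose_classifier = pose_classifier          # first classifier: sensor data -> poses
        self.movement_classifier = movement_classifier  # second classifier: poses -> movement, repetitions
        self.form_classifier = form_classifier          # third classifier: poses, repetitions -> score

    def process(self, sensor_stream):
        """Consume a processed stream of sensor data and yield feedback in real time."""
        for window in sensor_stream:
            poses = self.pose_classifier.detect(window)                       # one or more detected poses
            movement, repetitions = self.movement_classifier.classify(poses)  # e.g., ("barbell_squat", 3)
            score = self.form_classifier.score(poses, repetitions)            # adherence to predefined conditions
            yield {"movement": movement, "repetitions": repetitions, "score": score}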
According to another innovative aspect of the subject matter described in this disclosure, a system for providing feedback in real-time in association with a user performing an exercise movement is provided. The system includes: one or more processors; a memory storing instructions, which when executed cause the one or more processors to: receive a stream of sensor data in association with a user performing an exercise movement over a period of time; process the stream of sensor data; detect, using a first classifier on the processed stream of sensor data, one or more poses of the user performing the exercise movement; determine, using a second classifier on the one or more detected poses, a classification of the exercise movement and one or more repetitions of the exercise movement; determine, using a third classifier on the one or more detected poses and the one or more repetitions of the exercise movement, feedback including a score for the one or more repetitions, the score indicating an adherence to predefined conditions for correctly performing the exercise movement; and present the feedback in real-time in association with the user performing the exercise movement.
These and other implementations may each optionally include one or more of the following operations. For instance, the operations may include: determining, using a fifth classifier on the one or more repetitions of the exercise movement, a current level of fatigue for the user performing the exercise movement; generating a recommendation for the user performing the exercise movement based on the current level of fatigue; and presenting the recommendation in association with the user performing the exercise movement. Additionally, these and other implementations may each optionally include one or more of the following features. For instance, the features may include: detecting the one or more poses of the user performing the exercise movement comprising detecting a change in a pose of the user from a first pose to a second pose in association with performing the exercise movement; determining the classification of the exercise movement comprising identifying, using a fourth classifier on the one or more detected poses, an exercise equipment used in association with performing the exercise movement, and determining the classification of the exercise movement based on the exercise equipment; presenting the feedback in real-time in association with the user performing the exercise movement comprising determining data including acceleration, spatial location, and orientation of the exercise equipment in the exercise movement using the processed stream of sensor data, determining an actual motion path of the exercise equipment relative to the user based on the acceleration, the spatial location, and the orientation of the exercise equipment, determining whether a difference between the actual motion path and a correct motion path for performing the exercise movement satisfies a threshold, and responsive to determining that the difference between the actual motion path and the correct motion path for performing the exercise movement satisfies the threshold, presenting an overlay of the correct motion path to guide the exercise movement of the user toward the correct motion path; the recommendation comprising one or more of a set amount of weight to push or pull, a number of repetitions to perform, a set amount of weight to increase on an exercise movement, a set amount of weight to decrease on an exercise movement, a change in an order of exercise movements, increase a speed of an exercise movement, decrease the speed of an exercise movement, an alternative exercise movement, and a next exercise movement; the stream of sensor data comprising one or more of a first set of sensor data from an inertial measurement unit (IMU) sensor integrated with one or more exercise equipment in motion, a second set of sensor data from one or more wearable computing devices capturing physiological measurements associated with the user, and a third set of sensor data from an interactive personal training device capturing data including one or more image frames of the user performing the exercise movement; the feedback comprising one or more of heart rate, heart rate variability, a real-time count of the one or more repetitions of the exercise movement, a duration of rest, a duration of activity, a detection of use of an exercise equipment, and an amount of weight moved by the user performing the exercise movement; the exercise movement being one of bodyweight exercise movement, isometric exercise movement, and weight equipment-based exercise movement; and presenting the feedback in real-time in association with the 
user performing the exercise movement comprising displaying the feedback on an interactive screen of an interactive personal training device.
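A minimal sketch of the motion-path comparison mentioned above follows, assuming the actual and correct motion paths are available as equal-length sequences of three-dimensional equipment positions and that a mean point-wise distance serves as the deviation measure; the threshold value and the metric are illustrative assumptions, as the disclosure does not fix either.

import numpy as np

def motion_path_deviation(actual_path, correct_path):
    """Mean point-wise distance between the actual and the correct motion path.

    Both paths are (N, 3) arrays of equipment positions resampled to the same length.
    """
    actual = np.asarray(actual_path, dtype=float)
    correct = np.asarray(correct_path, dtype=float)
    return float(np.mean(np.linalg.norm(actual - correct, axis=1)))

def should_overlay_correct_path(actual_path, correct_path, threshold=0.05):
    # When the deviation satisfies (here: exceeds) the threshold, an overlay of the
    # correct motion path is presented to guide the user; 0.05 meters is an example value.
    return motion_path_deviation(actual_path, correct_path) > threshold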
Other implementations of one or more of these aspects and other aspects include corresponding systems, apparatus, and computer programs, configured to perform the various actions and/or store the various data described in association with these aspects. Numerous additional features may be included in these and various other implementations, as discussed throughout this disclosure.
The features and advantages described herein are not all-inclusive and many additional features and advantages will be apparent in view of the figures and description. Moreover, it should be understood that the language used in the present disclosure has been principally selected for readability and instructional purposes, and not to limit the scope of the subject matter disclosed herein.
The techniques introduced herein are illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals are used to refer to similar elements.
The network 105 may be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration, or other configurations. Furthermore, the network 105 may include any number of networks and/or network types. For example, the network 105 may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), virtual private networks (VPNs), mobile (cellular) networks, wireless wide area networks (WWANs), WiMAX® networks, Bluetooth® communication networks, peer-to-peer networks, and/or other interconnected data paths across which multiple devices may communicate, various combinations thereof, etc. The network 105 may also be coupled to or include portions of a telecommunications network for sending data in a variety of different communication protocols. In some embodiments, the network 105 may include Bluetooth communication networks or a cellular communications network for sending and receiving data including via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, WAP, email, etc. In some implementations, the data transmitted by the network 105 may include packetized data (e.g., Internet Protocol (IP) data packets) that is routed to designated computing devices coupled to the network 105. Although
The client devices 130a . . . 130n (also referred to individually and collectively as 130) may be computing devices having data processing and communication capabilities. In some implementations, a client device 130 may include a memory, a processor (e.g., virtual, physical, etc.), a power source, a network interface, software and/or hardware components, such as a display, graphics processing unit (GPU), wireless transceivers, keyboard, camera (e.g., webcam), sensors, firmware, operating systems, web browsers, applications, drivers, and various physical connection interfaces (e.g., USB, HDMI, etc.). The client devices 130a . . . 130n may couple to and communicate with one another and the other entities of the system 100 via the network 105 using a wireless and/or wired connection. Examples of client devices 130 may include, but are not limited to, laptops, desktops, tablets, mobile phones (e.g., smartphones, feature phones, etc.), server appliances, servers, virtual machines, smart TVs, media streaming devices, user wearable computing devices (e.g., fitness trackers) or any other electronic device capable of accessing a network 105. While two or more client devices 130 are depicted in
The interactive personal training devices 108a . . . 108n may be computing devices with data processing and communication capabilities. In the example of
The set of equipment 134 may include equipment used in the performance of exercise movements. Examples of such equipment may include, but are not limited to, dumbbells, barbells, weight plates, medicine balls, kettlebells, sandbags, resistance bands, jump ropes, abdominal exercise rollers, pull up bars, ankle weights, wrist weights, weighted vests, plyometric boxes, fitness steppers, stair climbers, rowing machines, smith machines, cable machines, stationary bikes, stepping machines, etc. The set of equipment 134 may include etchings denoting the associated weight in kilograms or pounds. In some implementations, an inertial measurement unit (IMU) sensor 132 may be embedded into a surface of the equipment 134. In some implementations, the IMU sensor 132 may be attached to the surface of the equipment 134 using an adhesive. In some implementations, the IMU sensor 132 may be inconspicuously integrated into the equipment 134. The IMU sensor 132 may be a wireless IMU sensor that is configured to be rechargeable. The IMU sensor 132 comprises multiple inertial sensors (e.g., accelerometer, gyroscope, magnetometer, barometric pressure sensor, etc.) to record comprehensive inertial parameters (e.g., motion force, position, velocity, acceleration, orientation, pressure, etc.) of the equipment 134 in motion during the performance of exercise movements. The IMU sensor 132 on the equipment 134 is communicatively coupled with the interactive personal training device 108 and is calibrated with the orientation, associated equipment type, and actual weight value (kg/lbs) of the equipment 134. This enables the interactive personal training device 108 to accurately detect and track acceleration, weight volume, equipment in use, equipment trajectory, and spatial location in three-dimensional space. The IMU sensor 132 is operable for data transmission via Bluetooth® or Bluetooth Low Energy (BLE). The IMU sensor 132 uses a passive connection instead of active pairing with the interactive personal training device 108 to improve data transfer reliability and reduce latency. For example, the IMU sensor 132 records sensor data for transmission to the interactive personal training device 108 only when accelerometer readings indicate the user is moving the equipment 134. In some implementations, the equipment 134 may incorporate a haptic device to create haptic feedback including vibrations or a rumble in the equipment 134. For example, the equipment 134 may be configured to create vibrations to indicate to the user a completion of one repetition of an exercise movement.
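As a non-limiting sketch of the accelerometer-gated transmission just described, the following Python fragment forwards IMU samples only while the net acceleration deviates from gravity by more than a threshold; the sample format, threshold value, and transmit callback are illustrative assumptions rather than details of the IMU sensor 132 itself.

import math

GRAVITY = 9.81          # m/s^2
MOTION_THRESHOLD = 1.5  # deviation from gravity, in m/s^2; illustrative value only

def is_in_motion(accel):
    """accel is an (ax, ay, az) tuple in m/s^2 from the IMU accelerometer."""
    magnitude = math.sqrt(accel[0] ** 2 + accel[1] ** 2 + accel[2] ** 2)
    return abs(magnitude - GRAVITY) > MOTION_THRESHOLD

def stream_while_moving(samples, transmit):
    # Forward samples to the (hypothetical) BLE transmit callback only while the
    # equipment 134 is being moved, mirroring the motion-gated recording above.
    for sample in samples:
        if is_in_motion(sample["accel"]):
            transmit(sample)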
Also, instead of or in addition to the IMU sensor 132, the set of equipment 134 may be embedded with one or more of: radio-frequency identification (RFID) tags for transmitting digital identification data (e.g., equipment type, weight, etc.) when triggered by an electromagnetic interrogation pulse from an RFID reader on the interactive personal training device 108; and machine-readable markings or labels, such as a barcode, a quick response (QR) code, etc., for transmitting identifying information about the equipment 134 when scanned and decoded by built-in cameras in the interactive personal training device 108. In some other implementations, the set of equipment 134 may be coated with a color marker that appears as a different color in nonvisible light, enabling the interactive personal training device 108 to distinguish between different equipment types and/or weights. For example, a 20 pound dumbbell appearing black in visible light may appear pink to an infrared (IR) camera associated with the interactive personal training device 108.
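A minimal sketch of decoding such a machine-readable label follows, assuming OpenCV is available on the interactive personal training device 108 and that the label payload is a simple comma-separated string; both the library choice and the payload format are assumptions made for illustration only.

import cv2  # assumption: OpenCV is used for QR decoding; the disclosure does not mandate a library

def identify_equipment_from_qr(frame):
    """Decode a hypothetical QR payload such as "dumbbell,20,lb" from a camera frame.

    Returns (equipment_type, weight, unit), or None when no label is readable.
    """
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)
    if not payload:
        return None
    equipment_type, weight, unit = payload.split(",")
    return equipment_type, float(weight), unit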
Each of the plurality of third-party servers 140 may be, or may be implemented by, a computing device including a processor, a memory, applications, a database, and network communication capabilities. A third-party server 140 may be a Hypertext Transfer Protocol (HTTP) server, a Representational State Transfer (REST) service, or other server type, having structure and/or functionality for processing and satisfying content requests and/or receiving content from one or more of the client devices 130, the interactive personal training devices 108, and the personal training backend server 120 that are coupled to the network 105. In some implementations, the third-party server 140 may include an online service 111 dedicated to providing access to various services and information resources hosted by the third-party server 140 via web, mobile, and/or cloud applications. The online service 111 may obtain and store user data, content items (e.g., videos, text, images, etc.), and interaction data reflecting the interaction of users with the content items. User data, as described herein, may include one or more of user profile information (e.g., user id, user preferences, user history, social network connections, etc.), logged information (e.g., heart rate, activity metrics, sleep quality data, calories and nutrient data, user device specific information, historical actions, etc.), and other user specific information. In some embodiments, the online service 111 allows users to share content with other users (e.g., friends, contacts, public, similar users, etc.), purchase and/or view items (e.g., e-books, videos, music, games, gym merchandise, subscription, etc.), and perform other similar actions. For example, the online service 111 may provide various services such as physical fitness service; running and cycling tracking service; music streaming service; video streaming service; web mapping service; multimedia messaging service; electronic mail service; news service; news aggregator service; social networking service; photo and video-sharing social networking service; sleep-tracking service; diet-tracking and calorie counting service; ridesharing service; online banking service; online information database service; travel service; online e-commerce marketplace; ratings and review service; restaurant-reservation service; food delivery service; search service; health and fitness service; home automation and security service; Internet of Things (IOT) service; multimedia hosting, distribution, and sharing service; cloud-based data storage and sharing service; a combination of one or more of the foregoing services; or any other service where users retrieve, collaborate, and/or share information, etc. It should be noted that the list of items provided as examples for the online service 111 above is not exhaustive and that others are contemplated in the techniques described herein.
In some implementations, a third-party server 140 sends and receives data to and from other entities of the system 100 via the network 105. In the example of
In the example of
In some implementations, the personal training backend server 120 may be operable to enable the users 106a . . . 106n of the interactive personal training devices 108a . . . 108n to create and manage individual user accounts; receive, store, and/or manage functional fitness programs created by the users; enhance the functional fitness programs with trained machine learning algorithms; share the functional fitness programs with subscribed users in the form of live and/or on-demand classes via the interactive personal training devices 108a . . . 108n; and track, analyze, and provide feedback using trained machine learning algorithms on the exercise movements performed by the users as appropriate, etc. The personal training backend server 120 may send data to and receive data from the other entities of the system 100 including the client devices 130, the interactive personal training devices 108, and third-party servers 140 via the network 105. It should be understood that the personal training backend server 120 is not limited to providing the above-noted acts and/or functionality and may include other network-accessible services. In addition, while a single personal training backend server 120 is depicted in
The personal training application 110 may include software and/or logic to provide the functionality for tracking physical activity of a user performing exercise movements and providing feedback and recommendations relating to performing the exercise movements. In some implementations, the personal training application 110 may be implemented using programmable or specialized hardware, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In some implementations, the personal training application 110 may be implemented using a combination of hardware and software. In other implementations, the personal training application 110 may be stored and executed on a combination of the interactive personal training devices 108 and the personal training backend server 120, or by any one of the interactive personal training devices 108 or the personal training backend server 120.
In some implementations, the personal training application 110 may be a thin-client application with some functionality executed on the interactive personal training device 108a (by the personal training application 110a) and additional functionality executed on the personal training backend server 120 (by the personal training application 110b). For example, the personal training application 110a may be storable in a memory (e.g., see
In some embodiments, the personal training application 110 may generate and present various user interfaces to perform these acts and/or functionality, which may in some cases be based at least in part on information received from the personal training backend server 120, the client device 130, the interactive personal training device 108, the set of equipment 134, and/or one or more of the third-party servers 140 via the network 105. Non-limiting example user interfaces that may be generated for display by the personal training application 110 are depicted in
In some implementations, the personal training application 110 may require users to be registered with the personal training backend server 120 to access the acts and/or functionality described herein. For example, to access various acts and/or functionality provided by the personal training application 110, the personal training application 110 may require a user to authenticate his/her identity. For example, the personal training application 110 may require a user seeking access to authenticate their identity by inputting credentials in an associated user interface. In another example, the personal training application 110 may interact with a federated identity server (not shown) to register and/or authenticate the user by scanning and verifying biometrics including facial attributes, fingerprint, and voice.
It should be understood that the system 100 illustrated in
The processor 235 may execute software instructions by performing various input/output, logical, and/or mathematical operations. The processor 235 may have various computing architectures to process data signals including, for example, a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, and/or an architecture implementing a combination of instruction sets. The processor 235 may be physical and/or virtual, and may include a single processing unit or a plurality of processing units and/or cores. In some implementations, the processor 235 may be capable of generating and providing electronic display signals to a display device 239, supporting the display of images, capturing and transmitting images, and performing complex tasks including various types of feature extraction and sampling. In some implementations, the processor 235 may be coupled to the memory 237 via the bus 220 to access data and instructions therefrom and store data therein. The bus 220 may couple the processor 235 to the other components of the computing device 200 including, for example, the memory 237, the communication unit 241, the display device 239, the input/output device(s) 247, the sensor(s) 249, and the data storage 243. In some implementations, the processor 235 may be coupled to a low-power secondary processor (e.g., sensor hub) included on the same integrated circuit or on a separate integrated circuit. This secondary processor may be dedicated to performing low-level computation at low power. For example, the secondary processor may perform sensor fusion, sensor batching, etc. in accordance with the instructions received from the personal training application 110.
The memory 237 may store and provide access to data for the other components of the computing device 200. The memory 237 may be included in a single computing device or distributed among a plurality of computing devices as discussed elsewhere herein. In some implementations, the memory 237 may store instructions and/or data that may be executed by the processor 235. The instructions and/or data may include code for performing the techniques described herein. For example, as depicted in
The memory 237 may include one or more non-transitory computer-usable (e.g., readable, writeable) devices, such as a static random access memory (SRAM) device, a dynamic random access memory (DRAM) device, an embedded memory device, a discrete memory device (e.g., a PROM, FPROM, ROM), a hard disk drive, or an optical disk drive (CD, DVD, Blu-ray™, etc.) medium, which can be any tangible apparatus or device that can contain, store, communicate, or transport instructions, data, computer programs, software, code, routines, etc., for processing by or in connection with the processor 235. In some implementations, the memory 237 may include one or more of volatile memory and non-volatile memory. It should be understood that the memory 237 may be a single device or may include multiple types of devices and configurations.
The bus 220 may represent one or more buses including an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or some other bus providing similar functionality. The bus 220 may include a communication bus for transferring data between components of the computing device 200 or between computing device 200 and other components of the system 100 via the network 105 or portions thereof, a processor mesh, a combination thereof, etc. In some implementations, the personal training application 110 and various other software operating on the computing device 200 (e.g., an operating system 107, device drivers, etc.) may cooperate and communicate via a software communication mechanism implemented in association with the bus 220. The software communication mechanism may include and/or facilitate, for example, inter-process communication, local function or procedure calls, remote procedure calls, an object broker (e.g., CORBA), direct socket communication (e.g., TCP/IP sockets) among software modules, UDP broadcasts and receipts, HTTP connections, etc. Further, any or all of the communication may be configured to be secure (e.g., SSH, HTTPS, etc.).
The display device 239 may be any conventional display device, monitor or screen, including but not limited to, a liquid crystal display (LCD), light emitting diode (LED), organic light-emitting diode (OLED) display or any other similarly equipped display device, screen or monitor. The display device 239 represents any device equipped to display user interfaces, electronic images, and data as described herein. In some implementations, the display device 239 may output display in binary (only two different values for pixels), monochrome (multiple shades of one color), or multiple colors and shades. The display device 239 is coupled to the bus 220 for communication with the processor 235 and the other components of the computing device 200. In some implementations, the display device 239 may be a touch-screen display device capable of receiving input from one or more fingers of a user. For example, the display device 239 may be a capacitive touch-screen display device capable of detecting and interpreting multiple points of contact with the display surface. In some implementations, the computing device 200 (e.g., interactive personal training device 108) may include a graphics adapter (not shown) for rendering and outputting the images and data for presentation on display device 239. The graphics adapter (not shown) may be a separate processing device including a separate processor and memory (not shown) or may be integrated with the processor 235 and memory 237.
The input/output (I/O) device(s) 247 may include any standard device for inputting or outputting information and may be coupled to the computing device 200 either directly or through intervening I/O controllers. In some implementations, the input device 247 may include one or more peripheral devices. Non-limiting example I/O devices 247 include a touch screen or any other similarly equipped display device equipped to display user interfaces, electronic images, and data as described herein, a touchpad, a keyboard, a scanner, a stylus, light emitting diode (LED) indicators or strips, an audio reproduction device (e.g., speaker), an audio exciter, a microphone array, a barcode reader, an eye gaze tracker, a sip-and-puff device, and any other I/O components for facilitating communication and/or interaction with users. In some implementations, the functionality of the input/output device 247 and the display device 239 may be integrated, and a user of the computing device 200 (e.g., interactive personal training device 108) may interact with the computing device 200 by contacting a surface of the display device 239 using one or more fingers. For example, the user may interact with an emulated (i.e., virtual or soft) keyboard displayed on the touch-screen display device 239 by using fingers to contact the display in the keyboard regions.
The capture device 245 may be operable to digitally capture an image (e.g., an RGB image, a depth map), a video, or other data of an object of interest. For example, the capture device 245 may be a high definition (HD) camera, a regular 2D camera, a multi-spectral camera, a structured light 3D camera, a time-of-flight 3D camera, a stereo camera, a standard smartphone camera, a barcode reader, an RFID reader, etc. The capture device 245 is coupled to the bus 220 to provide the images and other processed metadata to the processor 235, the memory 237, or the data storage 243. It should be noted that the capture device 245 is shown in
The sensor(s) 249 include any type of sensor suitable for the computing device 200. The sensor(s) 249 are communicatively coupled to the bus 220. In the context of the interactive personal training device 108, the sensor(s) 249 may be configured to collect any type of signal data suitable to determine characteristics of its internal and external environments. Non-limiting examples of the sensor(s) 249 include various optical sensors (CCD, CMOS, 2D, 3D, light detection and ranging (LiDAR), cameras, etc.), audio sensors, motion detection sensors, magnetometers, barometers, altimeters, thermocouples, moisture sensors, infrared (IR) sensors, radar sensors, other photo sensors, gyroscopes, accelerometers, geo-location sensors, orientation sensors, wireless transceivers (e.g., cellular, Wi-Fi™, near-field, etc.), sonar sensors, ultrasonic sensors, touch sensors, proximity sensors, distance sensors, microphones, etc. In some implementations, one or more sensors 249 may include externally facing sensors provided at the front side, rear side, right side, and/or left side of the interactive personal training device 108 in order to capture the environment surrounding the interactive personal training device 108. In some implementations, the sensor(s) 249 may include one or more image sensors (e.g., optical sensors) configured to record images including video images and still images, may record frames of a video stream using any applicable frame rate, and may encode and/or process the video and still images captured using any applicable methods. In some implementations, the image sensor(s) 249 may capture images of surrounding environments within their sensor range. For example, in the context of an interactive personal training device 108, the sensors 249 may capture the environment around the interactive personal training device 108 including people, ambient light (e.g., day or night time), ambient sound, etc. In some implementations, the functionality of the capture device 245 and the sensor(s) 249 may be integrated. It should be noted that the sensor(s) 249 is shown in
The communication unit 241 is hardware for receiving and transmitting data by linking the processor 235 to the network 105 and other processing systems via signal line 104. The communication unit 241 receives data such as requests from the interactive personal training device 108 and transmits the requests to the personal training application 110, for example a request to start a workout session. The communication unit 241 also transmits information including media to the interactive personal training device 108 for display, for example, in response to the request. The communication unit 241 is coupled to the bus 220. In some implementations, the communication unit 241 may include a port for direct physical connection to the interactive personal training device 108 or to another communication channel. For example, the communication unit 241 may include an RJ45 port or similar port for wired communication with the interactive personal training device 108. In other implementations, the communication unit 241 may include a wireless transceiver (not shown) for exchanging data with the interactive personal training device 108 or any other communication channel using one or more wireless communication methods, such as IEEE 802.11, IEEE 802.16, Bluetooth® or another suitable wireless communication method.
In yet other implementations, the communication unit 241 may include a cellular communications transceiver for sending and receiving data over a cellular communications network such as via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, WAP, e-mail or another suitable type of electronic communication. In still other implementations, the communication unit 241 may include a wired port and a wireless transceiver. The communication unit 241 also provides other conventional connections to the network 105 for distribution of files and/or media objects using standard network protocols such as TCP/IP, HTTP, HTTPS, and SMTP as will be understood to those skilled in the art.
The data storage 243 is a non-transitory memory that stores data for providing the functionality described herein. In some embodiments, the data storage 243 may be coupled to the components 235, 237, 239, 241, 245, 247, and 249 via the bus 220 to receive and provide access to data. In some embodiments, the data storage 243 may store data received from other elements of the system 100 including, for example, the API 136 in the third-party servers 140 and/or the personal training applications 110, and may provide data access to these entities. The data storage 243 may store, among other data, user profiles 222, training datasets 224, machine learning models 226, and workout programs 228.
The data storage 243 may be included in the computing device 200 or in another computing device and/or storage system distinct from but coupled to or accessible by the computing device 200. The data storage 243 may include one or more non-transitory computer-readable mediums for storing the data. In some implementations, the data storage 243 may be incorporated with the memory 237 or may be distinct therefrom. The data storage 243 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory, or some other memory device. In some implementations, the data storage 243 may include a database management system (DBMS) operable on the computing device 200. For example, the DBMS could include a structured query language (SQL) DBMS, a NoSQL DBMS, various combinations thereof, etc. In some instances, the DBMS may store data in multi-dimensional tables comprised of rows and columns, and manipulate, e.g., insert, query, update and/or delete, rows of data using programmatic operations. In other implementations, the data storage 243 also may include a non-volatile memory or similar permanent storage device and media including a hard disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a flash memory device, or some other mass storage device for storing information on a more permanent basis.
It should be understood that other processors, operating systems, sensors, displays, and physical configurations are possible.
As depicted in
The operating system 107, stored on memory 237 and configured to be executed by the processor 235, is a component of system software that manages hardware and software resources in the computing device 200. The operating system 107 includes a kernel that controls the execution of the personal training application 110 by managing input/output requests from the personal training application 110. The personal training application 110 requests a service from the kernel of the operating system 107 through system calls. In addition, the operating system 107 may provide scheduling, data management, memory management, communication control and other related services. For example, the operating system 107 is responsible for recognizing input from a touch screen, sending output to a display screen, tracking files on the data storage 243, and controlling peripheral devices (e.g., Bluetooth® headphones, equipment 134 integrated with an IMU sensor 132, etc.). In some implementations, the operating system 107 may be a general-purpose operating system. For example, the operating system 107 may be a Microsoft Windows®, Mac OS®, or UNIX® based operating system. Alternatively, the operating system 107 may be a mobile operating system, such as Android®, iOS®, or Tizen™. In other implementations, the operating system 107 may be a special-purpose operating system. The operating system 107 may include other utility software or system software to configure and maintain the computing device 200.
In some implementations, the personal training application 110 may include a personal training engine 202, a data processing engine 204, a machine learning engine 206, a feedback engine 208, a recommendation engine 210, a gamification engine 212, a program enhancement engine 214, and a user interface engine 216. The components 202, 204, 206, 208, 210, 212, 214, and 216 may be communicatively coupled by the bus 220 and/or the processor 235 to one another and/or the other components 237, 239, 241, 243, 245, 247, and 249 of the computing device 200 for cooperation and communication. The components 202, 204, 206, 208, 210, 212, 214, and 216 may each include software and/or logic to provide their respective functionality. In some implementations, the components 202, 204, 206, 208, 210, 212, 214, and 216 may each be implemented using programmable or specialized hardware including a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In some implementations, the components 202, 204, 206, 208, 210, 212, 214, and 216 may each be implemented using a combination of hardware and software executable by the processor 235. In some implementations, each one of the components 202, 204, 206, 208, 210, 212, 214, and 216 may be sets of instructions stored in the memory 237 and configured to be accessible and executable by the processor 235 to provide their acts and/or functionality. In some implementations, the components 202, 204, 206, 208, 210, 212, 214, and 216 may send and receive data, via the communication unit 241, to and from one or more of the client devices 130, the interactive personal training devices 108, the personal training backend server 120 and the third-party servers 140.
The personal training engine 202 may include software and/or logic to provide functionality for creating and managing user profiles 222 and selecting one or more workout programs for users of the interactive personal training device 108 based on the user profiles 222. In some implementations, the personal training engine 202 receives a user profile from a user's social network account with permission from the user. For example, the personal training engine 202 may access an API 136 of a third-party social network server 140 to request a basic user profile to serve as a starter profile. The user profile received from the third-party social network server 140 may include one or more of the user's age, gender, interests, location, and other demographic information. The personal training engine 202 may receive information from other components of the personal training application 110 and use the received information to update the user profile 222 accordingly. For example, the personal training engine 202 may receive information including performance statistics of the user participation in a full body workout session from the feedback engine 208 and update the workout history portion in the user profile 222 using the received information. In another example, the personal training engine 202 may receive achievement badges that the user earned after reaching one or more milestones from the gamification engine 212 and accordingly associate the badges with the user profile 222.
In some implementations, the user profile 222 may include additional information about the user including name, age, gender, height, weight, profile photo, 3D body scan, training preferences (e.g. HIIT, Yoga, barbell powerlifting, etc.), fitness goals (e.g., gain muscle, lose fat, get lean, etc.), fitness level (e.g., beginner, novice, advanced, etc.), fitness trajectory (e.g., losing 0.5% body fat monthly, increasing bicep size by 0.2 centimeters monthly, etc.), workout history (e.g., frequency of exercise, intensity of exercise, total rest time, average time spent in recovery, average time spent in active exercise, average heart rate, total exercise volume, total weight volume, total time under tension, one-repetition maximum, etc.), activities (e.g. personal training sessions, workout program subscriptions, indications of approval, multi-user communication sessions, purchase history, synced wearable devices, synced third-party applications, followers, following, etc.), video and audio of performing exercises, and profile rating and badges (e.g., strength rating, achievement badges, etc.). The personal training engine 202 stores and updates the user profiles 222 in the data storage 243.
The data processing engine 204 may include software and/or logic to provide functionality for receiving and processing a sensor data stream from a plurality of sensors focused on monitoring the movements, position, activities, and interactions of one or more users of the interactive personal training device 108. The data processing engine 204 receives a first set of sensor data from the sensor(s) 249 of the interactive personal training device 108. For example, the first set of sensor data may include one or more image frames, video, depth map, audio, and other sensor data capturing the user performing an exercise movement in a private or semi-private space. The data processing engine 204 receives a second set of sensor data from an inertial measurement unit (IMU) sensor 132 associated with an equipment 134 in use. For example, the second set of sensor data may include physical motion parameters, such as acceleration, velocity, position, orientation, rotation etc. of the equipment 134 used by the user in association with performing the exercise movement. The data processing engine 204 receives a third set of sensor data from sensors available in one or more wearable devices in association with the user performing the exercise movement. For example, the third set of sensor data may include physiological, biochemical, and environmental sensor signals, such as heart rate (pulse), heart rate variability, oxygen level, glucose, blood pressure, temperature, respiration rate, cutaneous water (sweat, salt secretion), saliva biomarkers, calories burned, eye tracking, etc. captured using one or more wearable devices during the user performance of the exercise movement.
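Purely as an illustration of how these three sets of sensor data might be represented before further processing, the following Python sketch defines one possible time-stamped record; the field names and types are assumptions, not a data format required by the disclosure.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SensorSample:
    """One time-stamped record in the consolidated sensor data stream (illustrative fields only)."""
    timestamp: float                                             # seconds since the start of the session
    frames: List[Any] = field(default_factory=list)              # image frames / depth maps / audio (first set)
    imu: Dict[str, Any] = field(default_factory=dict)            # acceleration, velocity, orientation, etc. (second set)
    physiology: Dict[str, float] = field(default_factory=dict)   # heart rate, oxygen level, temperature, etc. (third set)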
In some implementations, the data processing engine 204 receives contextual user data from a variety of third-party APIs 136 for online services 111 outside of an active workout session of a user. Example contextual user data that the data processing engine 204 collects includes, but is not limited to, sleep quality data of the user from a web API of a wearable sleep tracking device, physical activity data of the user from a web API of a fitness tracker device, calories and nutritional data from a web API of a calorie counter application, manually inputted gym workout routines, cycling, running, and competition (e.g., marathon, 5K run, etc.) participation statistics from a web API of a fitness mobile application, a calendar schedule of a user from a web API of a calendar application, social network contacts of a user from a web API of a social networking application, purchase history data from a web API of an e-commerce application, etc. This contextual user data is added to the existing workout data recorded on the interactive personal training device 108 and is used to determine the fitness of the user and to recommend a workout program based on, for example, fatigue levels (e.g., from exercise or poor sleep quality), nutrient intake (e.g., a lack or excess of calories), and exercises the user has performed outside of the interactive personal training device 108. The data processing engine 204 processes, correlates, integrates, and synchronizes the received sensor data stream and the contextual user data from disparate sources into a consolidated data stream as described herein. In some implementations, the data processing engine 204 time stamps the received sensor data at reception and uses the time stamps to correlate, integrate, and synchronize the received sensor data. For example, the data processing engine 204 synchronizes in time the sensor data received from the IMU sensor 132 on an equipment 134 with an image frame or depth map of the user performing the exercise movement captured by the sensor(s) 249 of the interactive personal training device 108.
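A minimal sketch of the time-stamp-based synchronization described above follows, assuming both streams arrive as (timestamp, data) pairs sorted by time; the tolerance value and the nearest-neighbour pairing strategy are illustrative assumptions.

import bisect

def synchronize(frames, imu_samples, tolerance=0.05):
    """Pair each time-stamped image frame with the nearest-in-time IMU sample.

    frames and imu_samples are lists of (timestamp, data) tuples sorted by timestamp;
    pairs farther apart than `tolerance` seconds are dropped (0.05 s is an example value).
    """
    imu_times = [t for t, _ in imu_samples]
    paired = []
    for frame_time, frame in frames:
        i = bisect.bisect_left(imu_times, frame_time)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_samples)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(imu_times[k] - frame_time))
        if abs(imu_times[j] - frame_time) <= tolerance:
            paired.append((frame_time, frame, imu_samples[j][1]))
    return paired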
In some implementations, the data processing engine 204 in an instance of the personal training application 110a on the interactive personal training device 108 performs preprocessing on the received data at the interactive personal training device 108 to reduce data transmitted over the network 105 to the personal training backend server 120 for analysis. The data processing engine 204 transforms the received data into a corrected, ordered, and simplified form for analysis. By preprocessing the received data at the interactive personal training device 108, the data processing engine 204 enables a low latency streaming of data to the personal training backend server 120 for requesting analysis and receiving feedback on the user performing the exercise movement. In one example, the data processing engine 204 receives image frames of a scene from a depth sensing camera on the interactive personal training device 108, removes non-moving parts in the image frames (e.g., background), and sends the depth information calculated for the foreground object to the personal training backend server 120 for analysis. Other data processing tasks performed by the data processing engine 204 to reduce latency may include one or more of data reduction, data preparation, sampling, subsampling, smoothing, compression, background subtraction, image cleanup, image segmentation, image rectification, spatial mapping, etc. on the received data. Also, the data processing engine 204 may determine a nearest personal training backend server 120 of a server cluster to send the data for analysis using network ping and associated response times. Other methods for improving latency include direct socket connection, DNS optimization, TCP optimization, adaptive frame rate, routing, etc. The data processing engine 204 sends the processed data stream to other components of the personal training application 110 for analysis and feedback.
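As one possible, non-limiting way to perform the background-removal step described above before streaming to the personal training backend server 120, the following sketch uses OpenCV's MOG2 background subtractor; the library choice, subtractor parameters, and mask cleanup are assumptions made for illustration.

import cv2  # assumption: OpenCV available on the interactive personal training device
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=120, detectShadows=False)

def extract_foreground_depth(frame, depth_map):
    """Zero out depth values for non-moving background pixels so that only the
    foreground (e.g., the user) is sent over the network 105 for analysis."""
    mask = subtractor.apply(frame)          # 0 for background, nonzero for moving foreground
    mask = cv2.medianBlur(mask, 5)          # light cleanup of the foreground mask
    return np.where(mask > 0, depth_map, 0)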
In some implementations, the data processing engine 204 curates one or more training datasets 224 based on the data received in association with a plurality of interactive personal training devices 108, the third-party servers 140, and the plurality of client devices 130. The machine learning engine 206 described in detail below uses the training datasets 224 to train the machine learning models. Example training datasets 224 curated by the data processing engine 204 include, but are not limited to, a dataset containing a sequence of images or video for a number of users engaged in physical activity synchronized with labeled time-series heart rate over a period of time, a dataset containing a sequence of images or video for a number of users engaged in physical activity synchronized with labeled time-series breathing rate over a period of time, a dataset containing a sequence of images or video for a number of repetitions relating to a labelled exercise movement (e.g., barbell squat) performed by a trainer, a dataset containing images for a number of labelled facial expressions (e.g., strained facial expression), a dataset containing images of a number of labelled equipment (e.g., dumbbell), a dataset containing images of a number of labelled poses (e.g., a downward phase of a barbell squat movement), etc. In some implementations, the data processing engine 204 accesses a publicly available dataset of images that may serve as a training dataset 224. For example, the data processing engine 204 may access a publicly available dataset to use as a training dataset 224 for training a machine learning model for object detection, facial expression detection, etc. In some implementations, the data processing engine 204 may create a crowdsourced training dataset 224. For example, in the instance where a user (e.g., a personal trainer) consents to use of their content for creating a training dataset, the data processing engine 204 receives the video of the user performing one or more unlabeled exercise movements. The data processing engine 204 provides the video to remotely located reviewers who review the video, identify a segment of the video, and classify and provide a label for the exercise movement present in the identified segment. The data processing engine 204 stores the curated training datasets 224 in the data storage 243.
The machine learning engine 206 may include software and/or logic to provide functionality for training one or more machine learning models 226 or classifiers using the training datasets created or aggregated by the data processing engine 204. In some implementations, the machine learning engine 206 may be configured to incrementally adapt and train the one or more machine learning models every threshold period of time. For example, the machine learning engine 206 may incrementally train the machine learning models every hour, every day, every week, every month, etc. based on the aggregated dataset. In some implementations, a machine learning model 226 is a neural network model and includes a layer and/or layers of memory units where memory units each have corresponding weights. A variety of neural network models may be utilized including feed forward neural networks, convolutional neural networks, recurrent neural networks, radial basis functions, other neural network models, as well as combinations of several neural networks. Additionally, or alternatively, the machine learning model 226 may represent a variety of other machine learning techniques in addition to neural networks, for example, support vector machines, decision trees, Bayesian networks, random decision forests, k-nearest neighbors, linear regression, least squares, hidden Markov models, other machine learning techniques, and/or combinations of machine learning techniques.
In some implementations, the machine learning engine 206 may train the one or more machine learning models 226 for a variety of machine learning tasks including estimating a pose (e.g., 3D pose (x, y, z) coordinates of keypoints), detecting an object (e.g., barbell, registered user), detecting a weight of the object (e.g., 45 lbs), edge detection (e.g., boundaries of an object or user), recognizing an exercise movement (e.g., dumbbell shoulder press, bodyweight push-up), detecting a repetition of an exercise movement (e.g., a set of 8 repetitions), detecting fatigue in the repetition of the exercise movement, detecting heart rate, detecting breathing rate, detecting blood pressure, detecting facial expression, detecting a risk of injury, etc. In another example, the machine learning engine 206 may train a machine learning model 226 to classify an adherence of an exercise movement performed by a user to predefined conditions for correctly performing the exercise movement. As a further example, the machine learning engine 206 may train a machine learning model 226 to predict the fatigue in a user performing a set of repetitions of an exercise movement. In some implementations, the machine learning model 226 may be trained to perform a single task. In other implementations, the machine learning model 226 may be trained to perform multiple tasks.
The machine learning engine 206 determines a plurality of training instances or samples from the labelled dataset curated by the data processing engine 204. A training instance can include, for example, an instance of a sequence of images depicting an exercise movement classified and labelled as a barbell deadlift. The machine learning engine 206 may apply a training instance as input to a machine learning model 226. In some implementations, the machine learning engine 206 may train the machine learning model 226 using at least one of supervised learning (e.g., support vector machines, neural networks, logistic regression, linear regression, stacking, gradient boosting, etc.), unsupervised learning (e.g., clustering, neural networks, singular value decomposition, principal component analysis, etc.), or semi-supervised learning (e.g., generative models, transductive support vector machines, etc.). Additionally, or alternatively, machine learning models 226 in accordance with some implementations may be deep learning networks including recurrent neural networks, convolutional neural networks (CNN), networks that are a combination of multiple networks, etc. The machine learning engine 206 may generate a predicted machine learning model output by applying training input to the machine learning model 226. Additionally, or alternatively, the machine learning engine 206 may compare the predicted machine learning model output with a known labelled output (e.g., classification of a barbell deadlift) from the training instance and, using the comparison, update one or more weights in the machine learning model 226. In some implementations, the machine learning engine 206 may update the one or more weights by backpropagating the difference over the entire machine learning model 226.
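A minimal sketch of one such supervised training step follows, using PyTorch purely as an illustrative framework (the disclosure does not mandate a particular library); the model, input tensor, and label are placeholders.

import torch.nn.functional as F

def training_step(model, optimizer, pose_sequence, label):
    """Apply one training instance, compare the predicted output with the known
    labelled output, and backpropagate the difference to update the model weights."""
    model.train()
    optimizer.zero_grad()
    prediction = model(pose_sequence)            # predicted machine learning model output
    loss = F.cross_entropy(prediction, label)    # difference from the known labelled output
    loss.backward()                              # backpropagate the difference
    optimizer.step()                             # update one or more weights
    return loss.item()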
In some implementations, the machine learning engine 206 may test a trained machine learning model 226 and update it accordingly. The machine learning engine 206 may partition the labelled dataset obtained from the data processing engine 204 into a testing dataset and a training dataset. The machine learning engine 206 may apply a testing instance from the testing dataset as input to the trained machine learning model 226. A predicted output generated by applying a testing instance to the trained machine learning model 226 may be compared with a known output for the testing instance to update an accuracy value (e.g., an accuracy percentage) for the machine learning model 226.
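A short sketch of this partition-and-test step follows, using scikit-learn style helpers as an assumption; the split ratio and accuracy measure are illustrative only.

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate(model, instances, labels):
    """Partition a labelled dataset, train on the training split, and return an
    accuracy percentage computed on the held-out testing split."""
    X_train, X_test, y_train, y_test = train_test_split(
        instances, labels, test_size=0.2, random_state=0)
    model.fit(X_train, y_train)                        # assumes a scikit-learn style estimator
    predictions = model.predict(X_test)
    return 100.0 * accuracy_score(y_test, predictions)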
Some examples of training machine learning models for specific tasks relating to tracking user performance of exercise movements are described below. In one example, the machine learning engine 206 trains a Convolutional Neural Network (CNN) and Fast Fourier Transform (FFT) based spectro-temporal neural network model to identify photoplethysmography (PPG) in pulse-heavy body parts, such as the face, the neck, biceps, wrists, hands, and ankles. The PPG is used to detect heart rate. The machine learning engine 206 trains the CNN and FFT based spectro-temporal neural network model using a training dataset including segmented images of pulse-heavy body parts synchronized with the time-series data of heart rate over a period of time. In another example, the machine learning engine 206 trains a Human Activity Recognition (HAR)-CNN model to identify PPG in the torso, arms, and head. The PPG is used to detect breathing rate and breathing intensity. The machine learning engine 206 trains the HAR-CNN model using a training dataset including segmented images of the torso, arms, and head synchronized with the time-series data of breathing rate over a period of time. In another example, the machine learning engine 206 trains a Region-based CNN (R-CNN) model to infer 3D pose coordinates for keypoints, such as elbows, knees, wrists, hips, shoulder joints, etc. The machine learning engine 206 trains the R-CNN using a labelled dataset of segmented depth images of keypoints in user poses. In another example, the machine learning engine 206 trains a CNN model for edge detection and identifying boundaries of objects including humans in grayscale images using a labeled dataset of segmented images of objects including humans.
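To illustrate only the spectral portion of the heart-rate example above, the following sketch estimates beats per minute from a remote-PPG intensity trace; the sampling rate, frequency band, and the omission of the CNN feature-extraction stage are all simplifying assumptions.

import numpy as np

def estimate_heart_rate_bpm(ppg_signal, fps=30.0):
    """Estimate heart rate from a 1-D trace of mean pixel intensities over a
    pulse-heavy region, sampled at `fps` frames per second over several seconds.

    Only frequencies in a plausible heart-rate band (0.7-4.0 Hz, roughly 42-240 BPM)
    are considered when locating the dominant spectral peak.
    """
    signal = np.asarray(ppg_signal, dtype=float)
    signal = signal - signal.mean()                    # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_frequency = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_frequency                       # convert Hz to beats per minute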
The feedback engine 208 may include software and/or logic to provide functionality for analyzing the processed stream of sensor data from the data processing engine 204 and providing feedback on one or more aspects of the exercise movement performed by the user. For example, the feedback engine 208 performs a real-time “form check” on the user performing an exercise movement.
The pose estimator 302 receives the processed sensor data stream including one or more images from the data processing engine 204 depicting one or more users and estimates the 2D or 3D pose coordinates for each keypoint (e.g., elbows, wrists, joints, knees, etc.). The pose estimator 302 tracks a movement of one or more users in real-world space by predicting the precise location of keypoints associated with the users. For example, the pose estimator 302 receives the RGB image and associated depth map, inputs the received data into a trained convolutional neural network for pose estimation, and generates 3D pose coordinates for one or more keypoints associated with a user. The pose estimator 302 generates a heatmap predicting the probability of the keypoint occurring at each pixel. In some implementations, the pose estimator 302 detects and tracks a static pose in a number of continuous image frames. For example, the pose estimator 302 classifies a pose as a static pose if the user remains in that pose for at least 30 image frames (2 seconds if the image frames are streaming at 15 FPS). The pose estimator 302 determines a position, an angle, a distance, and an orientation of the keypoints based on the estimated pose. For example, the pose estimator 302 determines a distance between the two knees, an angle between a shoulder joint and an elbow, a position of the hip joint relative to the knee, and an orientation of the wrist joint in an articulated pose based on the estimated 3D pose data. The pose estimator 302 determines an initial position, a final position, and a relative position of a joint in a sequence of a threshold number of frames. The pose estimator 302 passes the 3D pose data including the determined position, angle, distance, and orientation of the keypoints to other components 304, 306, 308, 310, 312, and 314 in the feedback engine 208 for further analysis.
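A minimal sketch of the kind of geometry the pose estimator 302 derives from estimated keypoints, together with a static-pose check over roughly 30 frames; the keypoint layout, units, and movement tolerance are illustrative assumptions rather than the actual thresholds.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by keypoints a-b-c, e.g., the
    shoulder-elbow-wrist angle, from estimated 3D pose coordinates."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def is_static_pose(keypoint_frames, window=30, tolerance=0.03):
    """True when every keypoint moves less than `tolerance` (in assumed
    normalized world units) over the last `window` frames, roughly 2 s at 15 FPS."""
    recent = np.asarray(keypoint_frames[-window:])
    if len(recent) < window:
        return False
    displacement = np.linalg.norm(recent.max(axis=0) - recent.min(axis=0), axis=-1)
    return bool(np.all(displacement < tolerance))

# Illustrative: an almost-motionless sequence of 5 keypoints in 3D.
frames = [np.zeros((5, 3)) + np.random.randn(5, 3) * 0.001 for _ in range(30)]
print(is_static_pose(frames), joint_angle([0, 1, 0], [0, 0, 0], [1, 0, 0]))
```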
In some implementations, the pose estimator 302 analyzes the sensor data including one or more images captured by the interactive personal training device 108 to generate anthropometric measurements including a three-dimensional view of the user's body. For example, the interactive personal training device 108 may receive a sequence of images that capture the details of the user's body in 360 degrees. The pose estimator 302 uses the combination of the sequence of images to generate a 3D visualization (e.g., an avatar) of the user's body and provides an estimate for body measurements (e.g., arms, thighs, hips, waist, etc.). The pose estimator 302 also determines body size, body shape, and body composition of the user. In some implementations, the pose estimator 302 generates a 3D model of the user (shown in
The object detector 304 receives the processed sensor data stream including one or more images from the data processing engine 204 and detects one or more objects (e.g., equipment 134) utilized by a user in association with performing an exercise movement. The object detector 304 detects and locates an object in the image using a bounding box encompassing the detected object. For example, the object detector 304 receives the RGB image and associated depth map, inputs the received data into a trained You Only Look Once (YOLO) convolutional neural network for object detection, detects a location of an object (e.g., a barbell with weight plates), and estimates a weight of the object. In some implementations, the object detector 304 determines a weight associated with the detected object by performing optical character recognition (OCR) on the detected object. For example, the object detector 304 detects markings designating a weight of a dumbbell in kilograms or pounds. In some implementations, the object detector 304 identifies the type and weight of the weight equipment based on the IMU sensor data associated with the weight equipment. The object detector 304 instructs the user interface engine 216 to display a detection of a weight equipment on the interactive screen of the interactive personal training device 108. For example, as the user picks up a weight equipment fitted with an IMU sensor, the object detector 304 identifies the type as a dumbbell and the weight as 25 pounds, and the user interface engine 216 displays the text “25 pound Dumbbell Detected.” In some implementations, the object detector 304 performs edge detection for segmenting boundaries of objects including one or more users within the images received over a time frame or period of time. The object detector 304 in cooperation with the action recognizer 306 (described below) uses a trained CNN model on the segmented images of a user extracted using edge detection to classify an exercise movement (e.g., a squat movement) of the user. In such implementations, the 3D pose data may be insufficient for classifying the exercise movement of the user, leading the feedback engine 208 to use edge detection as an alternative. The action recognizer 306 may use either the estimated 3D pose data or the edge detection data, or appropriately weight both (e.g., 90% weighting to 3D pose data, 10% weighting to edge detection data) for optimal classification of the exercise movement. In some implementations, the object detector 304 implements background subtraction to extract the detected object in the foreground for further processing. The object detector 304 determines a spatial distance of the object relative to the user as well as the floor plane or equipment. In some implementations, the object detector 304 detects the face of the user in the one or more images for facial authentication to use the interactive personal training device 108. The object detector 304 may analyze the images to detect a logo on a fitness apparel worn by the user, a style of the fitness apparel, and a fit of the fitness apparel. The object detector 304 passes the object detection data to other components 306, 308, 310, 312, and 314 in the feedback engine 208 for further analysis.
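A minimal sketch of how the pose-based and edge-based classification results might be combined with the example 90/10 weighting; the class names and probability values are illustrative assumptions only.

```python
import numpy as np

def fuse_classifications(pose_probs, edge_probs, pose_weight=0.9):
    """Weighted combination of two class-probability vectors: one from the
    classifier using 3D pose data, one from the classifier using edge data."""
    combined = (pose_weight * np.asarray(pose_probs)
                + (1.0 - pose_weight) * np.asarray(edge_probs))
    return combined / combined.sum()

classes = ["barbell squat", "deadlift", "lunge"]
pose_probs = [0.55, 0.30, 0.15]   # hypothetical output of the pose-based path
edge_probs = [0.70, 0.20, 0.10]   # hypothetical output of the edge-based path
fused = fuse_classifications(pose_probs, edge_probs)
print(classes[int(np.argmax(fused))], fused.round(3))
```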
The action recognizer 306 receives the estimated 3D pose data including the determined position, angle, distance, and orientation of the keypoints from the pose estimator 302 for analyzing the action or exercise movement of the user. In some implementations, the action recognizer 306 sends the 3D pose data to a separate logic defined for each exercise movement. For example, the logic may include a set of if-else conditions to determine whether a detected pose is part of the exercise movement. The action recognizer 306 scans for an action in the received data every threshold number (e.g., 100 to 300) of image frames to determine one or more exercise movements. An exercise movement may have two or more articulated poses that define the exercise movement. For example, a jumping jack is a physical jumping exercise performed by jumping to a first pose with the legs spread wide and hands going overhead, sometimes in a clap, and then returning to a second pose with the feet together and the arms at the sides. The action recognizer 306 determines whether a detected pose in the received 3D pose data matches one of the articulated poses for the exercise movement. The action recognizer 306 further determines whether there is a change in the detected poses from a first articulated pose to a second articulated pose defined for the exercise movement in a threshold number of image frames. Accordingly, the action recognizer 306 identifies the exercise movement based on the above determinations. In the instance of detecting a static pose in the received 3D pose data for a threshold number of frames, the action recognizer 306 determines that the user has stopped performing the exercise movement. For example, a user, after performing a set of repetitions of an exercise movement, may place their hands on their knees in a hunched position to catch their breath. The action recognizer 306 identifies such a static pose as not belonging to any articulated poses for purposes of exercise identification and determines that the user is simply at rest.
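A minimal sketch of the kind of per-movement if-else logic described above, using the jumping jack's two articulated poses; the keypoint names, 2D coordinate convention (y increasing downward), and threshold ratios are illustrative assumptions rather than the actual conditions.

```python
def matches_open_pose(kp):
    """First articulated pose of a jumping jack: legs spread, hands overhead."""
    legs_spread = abs(kp["left_ankle"][0] - kp["right_ankle"][0]) > \
                  1.5 * abs(kp["left_hip"][0] - kp["right_hip"][0])
    hands_overhead = kp["left_wrist"][1] < kp["nose"][1] and \
                     kp["right_wrist"][1] < kp["nose"][1]
    return legs_spread and hands_overhead

def matches_closed_pose(kp):
    """Second articulated pose: feet together, arms at the sides."""
    feet_together = abs(kp["left_ankle"][0] - kp["right_ankle"][0]) < \
                    1.2 * abs(kp["left_hip"][0] - kp["right_hip"][0])
    arms_down = kp["left_wrist"][1] > kp["left_hip"][1] and \
                kp["right_wrist"][1] > kp["right_hip"][1]
    return feet_together and arms_down

def classify_pose(kp):
    if matches_open_pose(kp):
        return "jumping_jack_open"
    if matches_closed_pose(kp):
        return "jumping_jack_closed"
    return "unmatched"

# Illustrative keypoints (x, y) for a frame in the open pose.
kp = {"nose": (0.5, 0.1), "left_wrist": (0.3, 0.05), "right_wrist": (0.7, 0.05),
      "left_hip": (0.45, 0.5), "right_hip": (0.55, 0.5),
      "left_ankle": (0.2, 0.9), "right_ankle": (0.8, 0.9)}
print(classify_pose(kp))   # jumping_jack_open
```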
The action recognizer 306 receives data including object data from the object detector 304 indicating a detection of an equipment utilized by a user in association with performing the exercise movement. The action recognizer 306 determines a classification of the exercise movement based on the use of the equipment. For example, the action recognizer 306 receives 3D pose data for a squat movement and a bounding box for the object detection performed on the barbell and plates equipment combination and classifies the exercise movement as a barbell squat exercise movement. In some implementations, the action recognizer 306 directs the data including the estimated 3D pose data, the object data, and the one or more image frames into a machine learning model (e.g., a Human Activity Recognition (HAR) convolutional neural network) trained for classifying each exercise movement and identifies a classification of the associated exercise movement. In one example, the HAR convolutional neural network may be trained to classify a single exercise movement. In another example, the HAR convolutional neural network may be trained to classify multiple exercise movements. In some implementations, the action recognizer 306 directs the data including the object data, the edge detection data, and the one or more image frames into a machine learning model (e.g., a convolutional neural network) trained for classifying each exercise movement and identifies a classification of the associated exercise movement without using 3D pose data. The action recognizer 306 passes the exercise movement classification results to other components 308, 310, 312, and 314 in the feedback engine 208.
The repetition counter 308 receives data including the estimated 3D pose data from the pose estimator 302 and the exercise classification result from the action recognizer 306 for determining the consecutive repetitions of an exercise movement. The repetition counter 308 identifies a change in pose over several consecutive image frames of the user from a static pose to one of the articulated poses of the identified exercise movement in the received 3D pose data as the start of the repetition. The repetition counter 308 scans for a change of pose of an identified exercise movement from a first articulated pose to a second articulated pose every threshold number (e.g., 100 to 300) of image frames. The repetition counter 308 counts the detected change in pose (e.g., from a first articulated pose to a second articulated pose) as one repetition of that exercise movement and increases a repetition counter by one. When the repetition counter 308 detects a static pose for a threshold number of frames after a series of changing articulated poses for the identified exercise movement, the repetition counter 308 determines that the user has stopped performing the exercise movement, generates a count of the consecutive repetitions detected so far for that exercise movement, and resets the repetition counter. It should be understood that the same HAR convolutional neural network used for recognizing an exercise movement may also be used or implemented by the repetition counter 308 in repetition counting. The repetition counter 308 may instruct the user interface engine 216 to display the repetition counting in real time on the interactive screen of the interactive personal training device 108. The repetition counter 308 may instruct the user interface engine 216 to present the repetition counting via audio on the interactive personal training device 108. The repetition counter 308 may instruct the user interface engine 216 to cause one or more light strips on the frame of the interactive personal training device 108 to pulse for repetition counting. In some implementations, the repetition counter 308 receives edge detection data including segmented images of the user actions over a threshold period of time and processes the received data for identifying waveform oscillations in the signal stream of images. An oscillation may be present when the exercise movement is repeated. The repetition counter 308 determines a repetition of the exercise movement using the oscillations identified in the signal stream of images.
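A minimal sketch of the repetition-counting logic described above: counting changes between articulated poses and resetting the count after a sustained static pose; the pose labels and the 30-frame static limit are illustrative assumptions.

```python
def count_repetitions(pose_labels, static_limit=30):
    """Counts changes between articulated poses of an identified exercise
    movement and resets when a static pose persists for `static_limit` frames.
    `pose_labels` is a per-frame sequence of articulated-pose or 'static' labels."""
    reps, last_articulated, static_run = 0, None, 0
    for label in pose_labels:
        if label == "static":
            static_run += 1
            if static_run >= static_limit and reps:
                print(f"set finished: {reps} repetitions")
                reps, last_articulated = 0, None
            continue
        static_run = 0
        if last_articulated is not None and label != last_articulated:
            reps += 1          # a change between articulated poses = one repetition
        last_articulated = label

# Illustrative frame stream: three pose changes, then a long rest.
stream = ["open"] * 10 + ["closed"] * 10 + ["open"] * 10 + ["closed"] * 10 + ["static"] * 30
count_repetitions(stream)      # set finished: 3 repetitions
```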
The movement adherence monitor 310 receives data including the estimated 3D pose data from the pose estimator 302, the object data from the object detector 304, the exercise classification result from the action recognizer 306, and the consecutive repetitions of the exercise movement from the repetition counter 308 for determining whether the user performance of one or more repetitions of the exercise movement adheres to predefined conditions or thresholds for correctly performing the exercise movement. A personal trainer or a professional may define conditions for a proper form associated with performing an exercise movement. In some implementations, the movement adherence monitor 310 may use a CNN model on a dataset containing repetitions of an exercise movement to determine the conditions for a proper form. The form may be defined as a specific way of performing the exercise movement to avoid injury, maximize benefit, and increase strength. To this end, the personal trainer may define the position, angle, distance, and orientation of joints, wrists, ankles, elbows, knees, back, head, etc. in the recognized way of performing a repetition of the exercise movement. In some implementations, the movement adherence monitor 310 determines whether the user performance of the exercise movement, in view of the body mechanics associated with correctly performing the exercise movement, falls within an acceptable range or threshold for human joint positions and movements. In some implementations, the movement adherence monitor 310 uses a machine learning model, such as a convolutional neural network trained on a large set of ideal or correct repetitions of an exercise movement, to determine a score or a quality of the exercise movement performed by the user based at least on the estimated 3D pose data and the consecutive repetitions of the exercise movement. For example, the score (e.g., 85%) may indicate the adherence to predefined conditions for correctly performing the exercise movement. The movement adherence monitor 310 sends the score determined for the exercise movement to the recommendation engine 210 to generate one or more recommendations for the user to improve the score.
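A minimal, rule-based sketch of scoring a repetition against predefined form conditions; the condition names, acceptable ranges, and measured values are illustrative assumptions, and a trained model could replace this rule check as described above.

```python
def adherence_score(measured, conditions):
    """Percentage of predefined form conditions satisfied by one repetition.
    `conditions` maps a named measurement to an acceptable (low, high) range."""
    satisfied = sum(low <= measured.get(name, float("nan")) <= high
                    for name, (low, high) in conditions.items())
    return 100.0 * satisfied / len(conditions)

# Illustrative conditions a trainer might define for a barbell squat repetition.
squat_conditions = {
    "knee_angle_bottom_deg": (70, 100),       # depth at the bottom of the squat
    "back_angle_deg": (45, 90),               # torso stays upright enough
    "knee_to_knee_distance_m": (0.25, 0.60),  # knees track over the feet
}
measured = {"knee_angle_bottom_deg": 95, "back_angle_deg": 40,
            "knee_to_knee_distance_m": 0.35}
print(f"form score: {adherence_score(measured, squat_conditions):.0f}%")  # 67%
```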
Additionally, the movement adherence monitor 310 receives data including processed sensor data relating to an IMU sensor 132 on the equipment 134 according to some implementations. The movement adherence monitor 310 determines equipment related data including acceleration, spatial location, orientation, and duration of movement of the equipment 134 in association with the user performing the exercise movement. The movement adherence monitor 310 determines an actual motion path of the equipment 134 relative to the user based on the acceleration, the spatial location, the orientation, and the duration of movement of the equipment 134. The movement adherence monitor 310 determines a correct motion path using the predefined conditions for the recognized way of performing the exercise movement. The movement adherence monitor 310 compares the actual motion path and the correct motion path to determine a percentage difference to an ideal or correct movement. If the percentage difference satisfies and/or exceeds a threshold (e.g., 5% and above), the movement adherence monitor 310 instructs the user interface engine 216 to present an overlay of the correct motion path on the display of the interactive personal training device 108 to guide the exercise movement of the user toward the correct motion path. If the percentage difference is within the threshold (e.g., between 1% and 5% variability), the movement adherence monitor 310 sends instructions to the user interface engine 216 to present the percentage difference to the ideal movement on the display of the interactive personal training device 108. In other implementations, the movement adherence monitor 310 may instruct the user interface engine 216 to display a movement range meter indicating how closely the user is performing an exercise movement according to conditions predefined for the exercise movement. Additionally, the movement adherence monitor 310 may instruct the user interface engine 216 to display an optimal acceleration and deceleration curve in the correct motion path for performing a repetition of the exercise movement.
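A minimal sketch of one way the percentage difference between the actual and correct motion paths could be computed, assuming both paths are sampled as 3D equipment positions at the same timesteps; the deviation measure and sample data are illustrative assumptions.

```python
import numpy as np

def path_difference_percent(actual_path, correct_path):
    """Mean deviation of the actual equipment path from the correct path,
    expressed as a percentage of the correct path's length. Both paths are
    (N, 3) arrays of equipment positions sampled at the same N timesteps."""
    actual, correct = np.asarray(actual_path), np.asarray(correct_path)
    deviation = np.linalg.norm(actual - correct, axis=1).mean()
    path_length = np.linalg.norm(np.diff(correct, axis=0), axis=1).sum()
    return 100.0 * deviation / path_length

# Illustrative: a vertical barbell path vs. one drifting slightly forward.
t = np.linspace(0.0, 1.0, 50)
correct = np.stack([np.zeros_like(t), np.zeros_like(t), t], axis=1)
actual = correct + np.stack([0.04 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
print(f"{path_difference_percent(actual, correct):.1f}% from the correct motion path")
```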
The status monitor 312 receives the processed sensor data including the images and estimated 3D pose data from the pose estimator 302 for determining and tracking vital signs and health status of the user during and after the exercise movement. For example, the status monitor 312 uses a multispectral imaging technique on data including the received images to identify small changes in the RGB (Red, Green, and Blue) spectrum of the user's face and determine remote heart rate readings based on photoplethysmography (PPG). The status monitor 312 stabilizes the movements in the received images by applying smoothing before determining the remote heart rate readings. In some implementations, the status monitor 312 uses trained machine learning classifiers to determine the health status of the user. For example, the status monitor 312 inputs the RGB sequential images, depth map, and 3D pose data into a trained convolutional neural network for determining one or more of a heart rate, heart rate variability, breathing rate, breathing intensity, blood pressure, facial expression, sweat, etc. In some implementations, the status monitor 312 also receives data relating to the measurements recorded by the wearable devices and uses them to supplement the tracking of vital signs and health status. For example, the status monitor 312 determines an average heart rate based on the heart rate detected using a trained convolutional neural network and a heart rate measured by a heart rate monitor device worn by the user while performing the exercise movement. In some implementations, the status monitor 312 may instruct the user interface engine 216 to display the tracked vital signs and health status on the interactive personal training device 108 in real time as feedback. For example, the status monitor 312 may instruct the user interface engine 216 to display the user's heart rate on the interactive screen of the interactive personal training device 108.
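A rough sketch of remote heart-rate estimation from the green-channel intensity of a face region, picking the dominant frequency in the typical heart-rate band; this is a simplified PPG illustration under assumed frame rate and band limits, not the trained multispectral or CNN approach described above.

```python
import numpy as np

def remote_heart_rate(green_means, fps=30.0):
    """Estimate heart rate (BPM) from the mean green-channel intensity of a
    face region over time, via the dominant frequency in the 0.7-4 Hz band."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * dominant

# Illustrative 10-second window with a 1.2 Hz (72 BPM) pulse plus noise.
t = np.arange(0, 10, 1 / 30.0)
samples = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(len(t))
print(f"estimated heart rate: {remote_heart_rate(samples):.0f} BPM")   # ~72
```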
The performance tracker 314 receives the output generated by the other components 302, 304, 306, 308, 310, and 312 of the feedback engine 208 in addition to the processed sensor data stream from the data processing engine 204. The performance tracker 314 determines performance statistics and metrics associated with the user's workout. The performance tracker 314 enables filtering of the performance statistics and metrics by time range, comparison of the performance statistics and metrics from two or more time ranges, and comparison of the performance statistics and metrics with those of other users. The performance tracker 314 instructs the user interface engine 216 to display the performance statistics and metrics on the interactive screen of the interactive personal training device 108. In one example, the performance tracker 314 receives the estimated 3D pose data, the object detection data, the exercise movement classification data, and the duration of the exercise movement to determine the power generated by the exercise movement. In another example, the performance tracker 314 receives information on the amount of weight lifted and the number of repetitions in the exercise movement to determine a total weight volume. In another example, the performance tracker 314 receives the estimated 3D pose data, the number of repetitions, equipment related IMU sensor data, and the duration of the exercise movement to determine time-under-tension. In another example, the performance tracker 314 determines the number of calories burned using the metrics output, such as time-under-tension, power generated, total weight volume, and the number of repetitions. In another example, the performance tracker 314 determines a recovery rate indicating how fast a user recovers from a set or workout session using the metrics output, such as power generated, time-under-tension, total weight volume, duration of activity, heart rate, detected facial expression, breathing intensity, and breathing rate.
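A minimal arithmetic sketch of two of the metrics mentioned above, power generated and time-under-tension, assuming the weight, range of motion, and repetition timings have already been derived from the pose, object, and IMU data; the formulas and the session numbers are illustrative.

```python
def average_power_watts(weight_kg, lift_height_m, repetitions, active_seconds):
    """Mechanical power for the set: the weight moved through the vertical
    range of motion (from the 3D pose data), repeated, over the active time.
    Uses g = 9.81 m/s^2."""
    work_joules = weight_kg * 9.81 * lift_height_m * repetitions
    return work_joules / active_seconds

def time_under_tension_seconds(rep_durations_s):
    """Seconds spent actively moving the load, summed over the repetitions."""
    return sum(rep_durations_s)

# Illustrative set: 8 repetitions of a 60 kg barbell squat with a 0.5 m
# range of motion, each repetition taking about 3 seconds of active movement.
durations = [3.0] * 8
tut = time_under_tension_seconds(durations)
power = average_power_watts(60.0, 0.5, 8, tut)
print(f"time under tension: {tut:.0f} s, average power: {power:.0f} W")
```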
Other examples of performance metrics and statistics include, but are not limited to, total rest time, energy expenditure, current and average heart rate, historical workout data compared with the current workout session, completed repetitions in an ongoing workout set, completed sets in an ongoing exercise movement, incomplete repetitions, etc. The performance tracker 314 derives total exercise volume from individual workout sessions over a length of time, such as daily, weekly, monthly, and annually. The performance tracker 314 determines total time under tension expressed in seconds or milliseconds using active movement time and bodyweight or equipment weight. The performance tracker 314 determines a total time of exercise expressed in minutes as the total length of the workout not spent in recovery or rest. The performance tracker 314 determines total rest time from time spent in an idle position, such as standing, lying down, hunched over, or sitting. The performance tracker 314 determines total weight volume by multiplying bodyweight by the number of repetitions for exercises without weights and multiplying equipment weight by the number of repetitions for exercises with weights. As a secondary metric, the performance tracker 314 derives work capacity by dividing the total weight volume by the total time of exercise. The performance tracker 314 cooperates with the personal training engine 202 to store the performance statistics and metrics in association with the user profile 222 in the data storage 243. In some implementations, the performance tracker 314 retrieves historical user performance of a workout similar to a current workout of the user and generates a summary comparing the historical performance metrics with the current workout as a percentage to indicate user progress.
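A minimal sketch of the total weight volume and work capacity calculations described above; the session numbers are illustrative.

```python
def total_weight_volume(weight_kg, repetitions):
    """Bodyweight (for unweighted exercises) or equipment weight, multiplied
    by the number of repetitions."""
    return weight_kg * repetitions

def work_capacity(total_volume_kg, total_exercise_minutes):
    """Secondary metric: total weight volume divided by the total time of
    exercise (time not spent in recovery or rest)."""
    return total_volume_kg / total_exercise_minutes

# Illustrative session: 3 sets of 10 repetitions with a 40 kg barbell,
# completed in 12 minutes of active exercise time.
volume = sum(total_weight_volume(40.0, 10) for _ in range(3))   # 1200 kg
print(volume, "kg total weight volume")
print(f"{work_capacity(volume, 12):.1f} kg per minute of exercise")
```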
In some implementations, the recommendation engine 210 processes the aggregate user dataset to tag a number of action sequences where multiple users in the identified community of users perform a plurality of repetitions of a specific exercise movement (e.g., barbell squat). The recommendation engine 210 uses the tagged sequences from the aggregate user dataset to train a machine learning model (e.g., CNN) to identify or predict a level of fatigue in the exercise movement. Fatigue in an exercise movement may be apparent from a user's inability to move a weight equipment or their own bodyweight at a similar speed, consistency, and steadiness over the several repetitions of the exercise movement. The recommendation engine 210 processes the sequence of the user's repetitions of performing the exercise movement using the trained machine learning model to classify the user's experience with the exercise movement and determine the user's current state of fatigue and ability to continue performing the exercise movement. In some implementations, the recommendation engine 210 may track fatigue by muscle group. Additionally, the recommendation engine 210 uses contextual user data including sleep quality data, nutritional intake data, and manually tracked workouts outside the context of the interactive personal training device 108 to predict a level of user fatigue.
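A rough heuristic sketch of detecting fatigue from repetition speed and consistency, as a stand-in for the trained model described above; the weighting and thresholds are illustrative assumptions, not the model's output.

```python
import numpy as np

def fatigue_level(rep_durations_s):
    """Heuristic fatigue estimate from repetition timing: slowing repetitions
    and inconsistent repetition duration both suggest fatigue. Returns 'low',
    'moderate', or 'high'."""
    d = np.asarray(rep_durations_s, dtype=float)
    slowdown = (d[-1] - d[0]) / d[0]          # relative slowing across the set
    variability = d.std() / d.mean()          # coefficient of variation
    score = 0.6 * max(slowdown, 0.0) + 0.4 * variability
    if score < 0.1:
        return "low"
    return "moderate" if score < 0.3 else "high"

print(fatigue_level([2.0, 2.0, 2.1, 2.0, 2.1]))   # low: steady repetitions
print(fatigue_level([2.0, 2.4, 2.9, 3.6, 4.5]))   # high: each rep gets slower
```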
The recommendation engine 210 generates on-the-fly recommendations to modify or alter the user exercise workout based on the state or level of fatigue of the user. For example, the recommendation engine 210 may recommend that the user push for As Many Repetitions As Possible (AMRAP) in the last set of an exercise movement if the level of fatigue of the user is low. In another example, the recommendation engine 210 may recommend that the user reduce the number of repetitions from 10 to five on a set of exercise movements if the level of fatigue of the user is high. In another example, the recommendation engine 210 may recommend that the user increase the weight on the weight equipment by 10 pounds if the level of fatigue of the user is low. In yet another example, the recommendation engine 210 may recommend that the user decrease the weight on the weight equipment by 20 pounds if the level of fatigue of the user is high. The recommendation engine 210 may also take into account any personally set objectives, a last occurrence of a workout session, the number of repetitions of one or more exercise movements, heart rate, breathing rate, facial expression, weight volume, etc. to generate a recommendation to modify the user exercise workout to prevent a risk of injury. For example, the recommendation engine 210 uses Heart Rate Variability (PPG-HRV) in conjunction with exercise analysis to recommend a change in exercise patterns (e.g., if PPG-HRV is poor, recommend a lighter workout). In some implementations, the recommendation engine 210 instructs the user interface engine 216 to display the recommendation on the interactive screen of the interactive personal training device 108 after the user completes a set of repetitions or at the end of the workout session. Example recommendations may include a set amount of weight to pull or push, a number of repetitions to perform (e.g., Push for one more rep), a set amount of weight to increase on an exercise movement (e.g., Add a 10 pound plate for barbell deadlift), a set amount of weight to decrease on an exercise movement (e.g., Remove a 20 pound plate for barbell squat), a change in an order of exercise movements, a change in a cadence of the repetition, an increase in a speed of an exercise movement, a decrease in a speed of an exercise movement (e.g., Reduce the duration of eccentric movement by 1 second to achieve 10% strength gain over 2 weeks), an alternative exercise movement (e.g., Do a goblet squat instead) to achieve a similar exercise objective, a next exercise movement, a stretching mobility exercise to improve a range of motion, etc.
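A minimal sketch of such a fatigue-driven, on-the-fly adjustment rule; the recommendation wording and numeric adjustments simply mirror the examples above and are illustrative, not a prescribed policy.

```python
def adjust_next_set(fatigue, planned_reps, planned_weight_lb):
    """Adjust the next set based on the predicted fatigue level
    ('low', 'moderate', or 'high')."""
    if fatigue == "low":
        return {"reps": "AMRAP on the last set",
                "weight_lb": planned_weight_lb + 10}
    if fatigue == "high":
        return {"reps": max(planned_reps // 2, 1),
                "weight_lb": planned_weight_lb - 20}
    return {"reps": planned_reps, "weight_lb": planned_weight_lb}

print(adjust_next_set("low", 10, 135))    # push harder, add weight
print(adjust_next_set("high", 10, 135))   # cut reps from 10 to 5, drop weight
```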
In some implementations, the recommendation engine 210 receives the actual motion path in association with a user using an equipment 134 performing an exercise movement from the movement adherence monitor 310 in the feedback engine 208. The recommendation engine 210 determines the direction of force used by the user in performing the exercise movement based on the actual motion path. If the percentage difference between the actual motion path and the correct motion path is not within a threshold limit, the recommendation engine 210 instructs the user interface engine 216 to generate an alert on the interactive screen of the interactive personal training device 108 informing the user to decrease force in the direction of the actual motion path to avoid injury. In some implementations, the recommendation engine 210 instructs the user interface engine 216 to generate an overlay over the reflected image of the user performing the exercise movement to show what part of their body is active in the exercise movement. For example, the user may be shown with their thigh region highlighted by an overlay in the interactive personal training device 108 to indicate that their quadriceps muscle group is active during a squat exercise movement. By viewing this overlay, the user may understand which part of their body should feel worked during the performance of a particular exercise movement. In some implementations, the recommendation engine 210 instructs the user interface engine 216 to generate an overlay of the user's prior performance of an exercise movement over the reflected image of the user performing the same exercise movement to show the user their past repetition and speed from a previous workout session. For example, the user may remember how the exercise movement was previously performed by viewing an overlay of their prior performance on the interactive screen of the interactive personal training device 108. In another example, the recommendation engine 210 may overlay a personal trainer performing the exercise movement on the interactive screen of the interactive personal training device 108. The recommendation engine 210 may determine a score for the repetitions of the exercise movement and show comparative progress of the user in performing the exercise movement from prior workouts.
In some implementations, the recommendation engine 210 receives the user profile of a user, analyzes the profile of the user, and generates one or more recommendations based on the user profile. The recommendation engine 210 recommends an optimal workout based on the historical performance statistics and workout pattern in the user profile. For example, the recommendation engine 210 instructs the user interface engine 216 to generate a workout recommendation tile on the interactive screen of the interactive personal training device 108 based on profile attributes, such as a last time the user exercised a particular muscle group, an intensity level (e.g., heart rate) of a typical workout session, a length of the typical workout session, the number of days since the last workout session, an age of the user, sleep quality data, etc. The recommendation engine 210 uses user profiles of other similarly performing users in generating workout recommendations for a target user. For example, the recommendation engine 210 analyzes the user profiles of similarly performing users who have done similar workouts, their ratings for those workouts, and their overall work capacity progress relative to the target user to generate recommendations.
In some implementations, the recommendation engine 210 recommends fitness-related items for user purchase based on the user profile. For example, the recommendation engine 210 determines user preference for a fitness apparel based on the detected logo on their clothing and recommends similar or different fitness apparel for the user to purchase. The recommendation engine 210 may identify the fit and style of the fitness apparel typically worn by the user and accordingly generate purchase recommendations. In another example, the recommendation engine 210 may recommend to the user the fitness apparel worn by a personal trainer to whom the user subscribes for daily workouts. The recommendation engine 210 may instruct the user interface engine 216 to generate an augmented reality overlay of the selected fitness apparel over the reflected image of the user to enable the user to virtually try on the purchase recommendations before purchasing. The recommendation engine 210 cooperates with a web API of an e-commerce application on the third-party server 140 to provide for frictionless purchasing of items via the interactive personal training device 108.
In some implementations, the recommendation engine 210 recommends to the user a profile of a personal trainer or another user to subscribe to and follow. The recommendation engine 210 determines the workout history, training preferences, fitness goals, etc. of a user based on their user profile and recommends other users who may have more expertise and share similar interests or fitness goals. For example, the recommendation engine 210 generates a list of the top 10 users who are strength training enthusiasts matching the interests of a target user on the platform. Users can determine what these successful users have done to achieve their fitness goals at an extremely granular level. The user may also follow other users and personal trainers by subscribing to the workout feed on their user profiles. In addition to the feed that provides comments, instructions, tips, workout summaries, and history, the user may see what workouts those users are doing and then perform those same workouts with the idea of modelling themselves after their favorite users.
The gamification engine 212 may include software and/or logic to provide functionality for managing, personalizing, and gamifying the user experience for exercise workout. The gamification engine 212 receives user performance data, user workout patterns, user competency level, user fitness goals, and user preferences from other components of the personal training application 110 and unlocks one or more workout programs (e.g., live instruction and on-demand classes), peer-to-peer challenges, and new personal trainers. For example, the gamification engine 212 rewards the user by unlocking a new workout program more challenging than a previous workout program that the user has successfully completed. This helps safeguard the user from trying out challenging or advanced workouts very early in their fitness journey and losing motivation to continue their workout. The gamification engine 212 determines a difficulty associated with a workout program based at least on heart rate, lean muscle mass, body fat percentage, average recovery time, exercise intensity, strength progression, work capacity, etc. required to complete the workout program in a given amount of time. Users gain access to new unlocked workout programs based on user performance from doing every repetition and moving appropriate weights in those repetitions for exercise movements in prior workout programs.
The gamification engine 212 instructs the user interface engine 216 to stream the on-demand and live instruction classes for the user on the interactive screen of the interactive personal training device 108. The user may see the instructor perform the exercise movement via the streaming video and follow their instruction. The instructor may commend the user on a job well done in a live class based on user performance statistics and metrics. The gamification engine 212 may configure a multiuser communication session (e.g., video chat, text chat, etc.) for a user to interact with the instructor or other users attending the live class via their smartphone device or interactive personal training device 108. In some implementations, the gamification engine 212 manages booking of workout programs and personal trainers for a user. For example, the gamification engine 212 receives a user selection of an upcoming workout class or an unlocked and available personal trainer for a one-on-one training session on the interactive screen of the interactive personal training device 108, books the selected option, and sends a calendar invite to the user's digital calendar. In some implementations, the gamification engine 212 configures two or more interactive personal training devices 108 at remote locations for a partner-based workout session using end-to-end live video streaming and voice chat. For example, a partner-based workout session allows a first user to perform one set of exercise movements and a second user (e.g., a partner of the first user) to perform the next set of exercise movements.
The gamification engine 212 enables a user to subscribe to a personal trainer, coach, or a pro-athlete for obtaining individualized coaching and personal training via the interactive personal training device 108. For example, the personal trainer, coach, or pro-athlete may create a subscription channel of live and on-demand fitness streaming videos on the platform, and a user may subscribe to the channel on the interactive personal training device 108. Through the channel, the personal trainer, coach, or pro-athlete may offer free group classes and/or fee-based one-on-one personal training to other users. The channel may offer program workouts curated by the personal trainer, coach, or pro-athlete. The program workouts may contain video of exercise movements performed by the personal trainer, coach, or pro-athlete for the subscribing user to follow and receive feedback in real time on the interactive personal training device 108. In some implementations, the gamification engine 212 enables the creator of the program workout to review workout history including a video of the subscribing user performing the exercise movements and performance statistics and metrics of the user. The creator may critique the user's form and provide proprietary tips and suggestions to the user to improve their performance.
The gamification engine 212 allows users to earn achievement badges by completing a milestone that qualifies them as competent. The gamification engine 212 monitors the user performance data on a regular basis and suggests new achievement badges to unlock or presents the achievement badges to the user to associate with their user profile in the community of users. For example, the achievement badges may include one or more of a badge for completing a threshold number of workout sessions consistently, a badge for reaching a power level ‘n’ in strength training, a badge for completing a fitness challenge, a badge for unlocking access to a more difficult workout session, a badge for unlocking and winning a peer competition with other users of similar competence and performance levels, a badge for unlocking access to a particular personal trainer, etc. In some implementations, the gamification engine 212 allows the users to share their data including badges, results, workout statistics, and performance metrics with a social network of the user's choice. The gamification engine 212 receives likes, comments, and other user interactions on the shared user data and displays them in association with the user profile. The gamification engine 212 cooperates with the pose estimator 302 to generate a 3D body scan for accurately visualizing the body transformations of users including body rotations over time and enables sharing of the body transformations on a social network.
In some implementations, the gamification engine 212 may generate a live leaderboard allowing users to view how they rank against their peers on a plurality of performance metrics. For example, the leaderboard may present the user's ranking against friends, regional communities, and/or the entire community of users. The ranking of users shown on the leaderboard can be sorted by a plurality of performance metrics. The plurality of performance metrics may include, for example, overall fitness (strength, endurance, total volume, volume under tension, power, etc.), overall strength, overall endurance, most number of workouts, age, gender, age groups, similar performance, number of peer-to-peer challenges won, champions, attendance in most number of classes, open challenges, etc. In some implementations, the gamification engine 212 may create a matchup between two users on the leaderboard or from personal contacts on the platform to compete on a challenge based on their user profiles. For example, users may be matched up based on similar performance metrics and workout history included in the user profiles. A fitness category may be selected on which to challenge and compete including, for example, a time-based fitness challenge, a strength challenge, an exercise or weight volume challenge, endurance challenge, etc. In some implementations, the challenge may be public or visible only to the participants.
The program enhancement engine 214 may include software and/or logic to provide functionality for enhancing one or more workout programs created by third-party content providers or users, such as personal trainers, coaches, and pro-athletes. The program enhancement engine 214 provides access to personal trainers, coaches, and pro-athletes to create a set of exercise movements or workouts that may be enhanced using the feedback engine 208. For example, the feedback engine 208 analyzes the exercise movement in the created workout to enable a detection of repetition counting and the display of feedback in association with the exercise movement when it is performed by a user subscriber on the interactive personal training device 108. The program enhancement engine 214 receives a video of a user performing one or more repetitions of an exercise movement in the new workout program. The program enhancement engine 214 analyzes the video using the pose estimator 302 to estimate pose data relating to performing the exercise movement. The program enhancement engine 214 instructs the user interface engine 216 to generate a user interface to receive from the user an input (e.g., ground truth) indicating the position, the angle, and the relative distance between the detected keypoints in a segment of the video containing a repetition of the exercise movement from start to end. For example, the user uploads a video of the user performing a combination of a front squat movement and a standing overhead press movement. The user specifies the timestamps in the video segment that contain this new combination of exercise movement and sets conditions or acceptable thresholds for completing a repetition including angles and distance between keypoints, speed of movement, and range of movement. The program enhancement engine 214 creates and trains a machine learning model for classifying the exercise movement using the user input as initial weights of the machine learning model and the video of the user performing the repetitions of the exercise movement. The program enhancement engine 214 then applies this machine learning model on a plurality of videos of users performing repetitions of this exercise movement from the new workout program. The program enhancement engine 214 determines a performance of the machine learning model to classify the exercise movement in the plurality of videos. This performance data and the associated manual labelling of incorrect classifications are used to retrain the machine learning model to improve the classification of the exercise movement and to provide feedback including repetition counting to user subscribers training with the new workout program.
The user interface engine 216 may include software and/or logic for providing user interfaces to a user. In some embodiments, the user interface engine 216 receives instructions from the components 202, 204, 206, 208, 210, 212, and 214, generates a user interface according to the instructions, and transmits the user interface for display on the interactive personal training device 108. In some implementations, the user interface engine 216 sends graphical user interface data to an application in the device 108 via the communication unit 241 causing the application to display the data as a graphical user interface.
A system and method for tracking physical activity of a user performing exercise movements and providing feedback and recommendations relating to performing the exercise movements has been described. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the techniques introduced above. It will be apparent, however, to one skilled in the art that the techniques can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description and for ease of understanding. For example, the techniques are described in one embodiment above primarily with reference to software and particular hardware. However, the present invention applies to any type of computing system that can receive data and commands, and present information as part of any peripheral devices providing services.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the detailed descriptions described above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are, in some circumstances, used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The techniques also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories including USB keys with non-volatile memory, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
Some embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. One embodiment is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, some embodiments can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
A data processing system suitable for storing and/or executing program code can include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
Finally, the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the techniques are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the various embodiments as described herein.
The foregoing description of the embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the specification to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the embodiments be limited not by this detailed description, but rather by the claims of this application. As will be understood by those familiar with the art, the examples may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the description or its features may have different names, divisions and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, routines, features, attributes, methodologies and other aspects of the specification can be implemented as software, hardware, firmware or any combination of the three. Also, wherever a component, an example of which is a module, of the specification is implemented as software, the component can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, and/or in every and any other way known now or in the future to those of ordinary skill in the art of computer programming. Additionally, the specification is in no way limited to embodiment in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the specification, which is set forth in the following claims.
The present application claims priority, under 35 U.S.C. § 119, of U.S. Provisional Patent Application No. 62/872,766, filed Jul. 11, 2019 and entitled “Exercise System including Interactive Display and Method of Use,” which is incorporated by reference in its entirety.
Number | Date | Country
62/872,766 | Jul 2019 | US