EXERCISE APPARATUS WITH INTEGRATED HOLOGRAPHIC DISPLAY

Abstract
A stationary exercise machine with an integrated holographic display that simulates depth and motion is disclosed. The disclosure uses an integrated and responsive 3-dimensional (3D) or holographic display attached to and/or integrated with stationary exercise equipment to create a more immersive, engaging and enjoyable stationary exercise experience. The 3D or holographic display provides a more stimulating sensory experience and can better simulate the perception of depth and motion through a 3D virtual environment.
Description
FIELD

This relates to an exercise apparatus and, more specifically, to an exercise machine with an integrated holographic display.


BACKGROUND OF THE INVENTION

Current stationary exercise equipment like treadmills, ellipticals, rowing machines, stationary bikes and weight machines can be monotonous to use. While some come with integrated displays for entertainment while exercising, these displays fail to provide rich sensory immersion and the sense of motion that make outdoor exercise like running, rowing or cycling engaging to athletes.


Some existing stationary exercise equipment with 2-dimensional (2D) displays attempts to mitigate the monotony of stationary exercise with on-screen media, but such displays do not provide the stereoscopic visuals and head-motion parallax that are required to simulate depth perception and convincing motion through 3-dimensional (3D) space, making them less immersive and less engaging than their non-stationary counterparts for outdoor course or indoor track exercise. Stationary exercise is advantageous for several reasons, including that it can be done from the comfort of one's home or gym, regardless of weather conditions outside. However, current stationary exercise equipment does not offer the immersive visuals and perception of motion that can increase enjoyment and engagement and enhance therapeutic mental health benefits during exercise. While other systems use virtual reality and augmented reality headsets or eyewear connected to exercise equipment to simulate motion, such headsets are cumbersome to wear and prone to causing nausea or motion sickness.


There is no existing exercise equipment that can provide an immersive experience without requiring the user to wear a headset.


SUMMARY

Embodiments of the disclosure use an integrated and responsive 3D or holographic display attached to and/or integrated with stationary exercise equipment to create a more immersive, engaging and enjoyable stationary exercise experience. The 3D or holographic display provides a more stimulating sensory experience and can better simulate the perception of motion through a 3D virtual environment.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. These drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings.



FIG. 1 illustrates an exemplary exercise apparatus, according to an embodiment of the disclosure.



FIG. 2 illustrates another exemplary exercise apparatus, according to an embodiment of the disclosure.



FIG. 3 is a block diagram illustrating the exemplary hardware components of an exercise apparatus.



FIG. 4 is a flowchart illustrating the exemplary steps of a method of providing an immersive experience using an exercise apparatus, according to an embodiment of the disclosure.



FIG. 5 is a block diagram illustrating the exemplary components of an exercise apparatus, according to an embodiment of the disclosure.



FIG. 6 is a block diagram illustrating a network of devices that can generate performance baselines, according to an embodiment of the disclosure.



FIG. 7 is a flow chart illustrating the exemplary steps in a method of verifying performance of a user of an exercise apparatus, according to an embodiment of the disclosure.



FIG. 8a is a block diagram illustrating the exemplary components of a system for generating enhanced graphics for providing an immersive experience, according to an embodiment of the disclosure.



FIG. 8b illustrates the exemplary steps in a method of generating enhanced graphics to be displayed in real time as the user exercises on an exercise apparatus, according to an embodiment of the disclosure.



FIG. 9 is a 2-dimensional illustration of the zoom effect on a 3D or holographic display, according to an embodiment of the disclosure.



FIG. 10a is a block diagram illustrating the exemplary components of a system for generating enhanced graphics for providing an immersive experience, according to an embodiment of the disclosure.



FIG. 10b illustrates the exemplary steps in another method of generating enhanced graphics to be displayed in real time as the user exercises on an exercise apparatus, according to an embodiment of the disclosure.



FIG. 11a illustrates the exemplary components of a system for inducing neural entrainment in the user while using an exercise apparatus such as the stationary bike of FIG. 1 and the treadmill of FIG. 2.



FIG. 11b illustrates the exemplary steps in a method of inducing neural entrainment in the user of an exercise apparatus, according to an embodiment of the disclosure.



FIGS. 12a and 12b illustrate exemplary exercise apparatuses, according to other embodiments of the disclosure.





DETAILED DESCRIPTION OF THE INVENTION

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.


The terms “exercise apparatus,” “exercise equipment,” and “exercise machine” are used interchangeably in this document.


The terms “holographic”, “3D”, “spatial”, and “volumetric” are used interchangeably in reference to display technology. They describe various systems that enable on-screen visuals to appear different to each of the user's eyes (i.e. stereoscopic visuals) and to change based on the position of the user's head (i.e. head-motion parallax), which together serve to accurately simulate depth and motion through 3D space. These systems can include lenticular displays, retinal projectors, and/or light-field displays, as well as face-tracking cameras to optimize imagery based on the position of the user's eyes and head. These methods of display do not require headwear or eyewear, as in other virtual or augmented reality systems.


As stated above, current stationary exercise equipment like treadmills, ellipticals and stationary bikes can be monotonous to use. While some come with integrated displays for entertainment while exercising, these displays fail to provide rich sensory immersion and the sense of motion that make outdoor exercise like running or cycling engaging to athletes. Embodiments of the disclosure solve this problem.


To make stationary exercise machines more immersive and engaging, as well as to enhance their physical and mental health benefits, embodiments of the disclosure integrate a holographic display that responds when, for example, force is exerted on the stationary exercise machine (e.g. when the user pedals a stationary bicycle), changing the imagery on screen to simulate motion through a virtual environment.


In some embodiments, the 3D display can also be utilized more generally to make on-screen content more engaging and immersive (e.g. displaying 3D movies as the user exercises). To further increase the immersiveness of the exercise experience, the spatial display on the exercise equipment can integrate with spatial speakers to simulate sound in virtual space.


Embodiments of the disclosure differ from what currently exists. The embodiments use a holographic display attached to and/or integrated with stationary exercise equipment to provide stereoscopic visuals and head-motion parallax that enhance the exercise experience. Additionally, a holographic display that is integrated with the exercise equipment is less nausea-inducing and less cumbersome than a virtual or augmented reality head-mounted display.


Stationary exercise equipment, even those with displays, provides a less immersive and engaging experience than outdoor or track exercise. As a result of their 2D displays, they can only convey depth and motion in a limited capacity through monoscopic perspective (e.g. using monoscopic video that displays distant virtual objects smaller than nearby virtual objects). These systems are unable to provide stereoscopic visuals or head-motion parallax required for rich sensory stimulation and convincing perception of motion.


Embodiments of the disclosure use an integrated and responsive 3D or holographic display attached to and/or integrated with stationary exercise equipment to create a more immersive, engaging and healthy stationary exercise experience. The holographic display provides a more stimulating sensory experience and can better simulate the perception of motion through a 3D virtual environment.


Also, embodiments of the disclosure function as a software content delivery system. The software content (e.g. software applications) delivered through exercise equipment can utilize the sensors, displays, cameras and exercise equipment in different ways to create different exercise experiences. For example, one software application might virtually simulate cycling the Tour de France with other live participants connected via the internet in an immersive 3D scene generated by photogrammetry depth capture. Another application might simulate running through a virtual forest realistically rendered from a game engine. Yet another might simulate a fitness class with a personal trainer appearing as an animated avatar in a virtual gym. Other content might include holographic music visualization or computational art. Any application can leverage the array of display(s), speaker(s), sensor(s), and exercise equipment in different ways to create unique 3D and holographic experiences.


The exercise equipment according to the disclosed embodiments can be stationary exercise equipment (e.g. stationary bike, treadmill, elliptical, rowing machine, etc.) with electronic sensors to detect force exerted by the user and/or capture other information relating to the user's movement when using the exercise equipment.


The exercise equipment disclosed herein can include a 3D or holographic display (e.g., one utilizing lenticular display technology, light-field display technology or projector-based display technology) and a positional-tracking camera system (for face tracking) that records the position of the user's eyes so that the imagery from the 3D or holographic display can be adapted to better simulate depth perception through a 3D virtual environment.


The exercise equipment disclosed herein can additionally include computer hardware/software such as one or more processors to interpret sensor and camera data and output graphical content through the display.


Embodiments of the exercise equipment can display virtual content through the 3D or holographic display.


In another aspect of the disclosure, a method of connecting an exercise apparatus to the internet to stream updates or access content is disclosed. The connection can be mediated through a mobile phone or made directly over WiFi or cellular networks.



FIG. 1 illustrates a stationary exercise bike 100 with one or more electronic sensors 101 that can detect force exerted by the user when the user exercises on the stationary exercise bike 100. The sensor(s) can transmit data to a computer processor 104 of the stationary exercise bike 100. The computer processor 104 can interpret the sensor data and generate imagery that can be displayed through a 3D or holographic display 102 to simulate motion through a virtual environment as the user exercises.


The stationary exercise bike 100 can incorporate one or more sensors 101 to detect force exerted by the user and capture other data on how the user is using the stationary exercise bike 100. The sensors for detecting force and other data vary depending on the type of exercise equipment and may include potentiometers, gyroscopes and accelerometers as well as optical sensors like cameras. In some embodiments, the one or more sensors 101 can also include pressure sensors, rotation sensors, position sensors, cadence sensors, vibration sensors, etc.


In the embodiment illustrated in FIG. 1, the one or more sensors 101 can be embedded in, for example, the pedals, the handle grips, the seat bottom, and/or any other locations of the stationary exercise bike 100 that would allow the sensors 101 to capture data reflecting how the user is using the stationary exercise bike 100.


For example, force sensors 101 in the pedals can detect the amount of force exerted by the user when pedaling. Alternatively or additionally, a cadence sensor 107 can be attached to the stationary bike's crank arm to measure the real-time cadence when the stationary bike 100 is in use. Alternatively or additionally, one or more vibration sensors and/or accelerometers 109 can be attached to or embedded in the frame of the stationary bike 100 to detect the vibration and/or tilting of the stationary bike 100 when in use. It should be understood that the exemplary sensors 101, 107, 109 shown in FIG. 1 can be positioned at locations other than those shown in FIG. 1. It should also be understood that not all illustrated sensors may be included and that additional sensors not shown in FIG. 1 can be added to the stationary bike 100 to capture additional data on how the user is using the stationary bike 100.


Additionally, one or more sensors 110 may be integrated with the display 102 to detect the position and orientation of the display 102. These may be used to adapt on-screen content, for example, so that the horizon line displayed on-screen matches the real-world horizon line regardless of the angle and orientation of the display. Sensor 110 can be an accelerometer or a position sensor.
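As a purely illustrative, non-limiting sketch (the coordinate conventions and function names below are assumptions, not part of the disclosure), the roll of the display estimated from the gravity reading of sensor 110 could be used to counter-rotate on-screen content so the virtual horizon stays level:

```python
import math

def display_roll_deg(accel_x, accel_y):
    """Roll angle of the display estimated from the gravity vector
    measured by its accelerometer (sensor 110). Assumes the
    accelerometer's y-axis points toward the floor when the display is
    level, so a level display reads (0, g) and any roll shows up as a
    nonzero x component."""
    return math.degrees(math.atan2(accel_x, accel_y))

def horizon_correction_deg(accel_x, accel_y):
    """Counter-rotation to apply to the rendered scene so the virtual
    horizon stays aligned with the real-world horizon."""
    return -display_roll_deg(accel_x, accel_y)
```

A position sensor could substitute for the accelerometer, in which case the roll angle would be read directly rather than derived from gravity.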


The processor 104 can receive signal(s) from the one or more sensors 101 and determine, based on the signal(s), for example, the real-time speed at which the user is pedaling the bike, and adjust the 3D or holographic environment being displayed on the display 102 accordingly to properly simulate the user riding through the virtual environment.
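By way of a purely illustrative sketch (the gear ratio, wheel circumference, and function names below are hypothetical assumptions, not disclosed values), the processor's conversion of a sensed pedal cadence into the speed at which the virtual scene advances might look like the following:

```python
def virtual_speed_mps(cadence_rpm, wheel_circumference_m=2.1, gear_ratio=2.5):
    """Convert pedal cadence (revolutions per minute) into a simulated
    ground speed in meters per second for advancing the virtual scene."""
    wheel_rpm = cadence_rpm * gear_ratio
    return wheel_rpm * wheel_circumference_m / 60.0

def advance_scene(position_m, cadence_rpm, dt_s):
    """Advance the rider's position along the virtual course by the
    distance covered since the previous display frame."""
    return position_m + virtual_speed_mps(cadence_rpm) * dt_s
```

In this sketch, a cadence of 60 rpm advances the virtual course at 5.25 m/s; a real implementation would also fold in resistance and incline settings.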


The stationary bike of FIG. 1 can also include a face-tracking camera 103. The face-tracking camera 103 is used to record the position of the user's eyes and the direction of their gaze. As illustrated in FIG. 1, the camera 103 can be attached or positioned facing the user at the top of the display screen 102 of the stationary bike 100. It should be understood that, depending on the type and specification of the camera used, the camera 103 can be positioned in any place on the stationary bike that would allow it to accurately track and record the user's eye movements. In some embodiments, multiple cameras can be used to provide a wider area of coverage.


The camera 103 can send, in real time, a signal carrying the captured information to the processor 104. The processor can then process the signal from the camera to determine the imagery output through the 3D or holographic display 102 so that it adapts to accurately simulate depth perception and motion through virtual space. As a result, while the user exerts force on the stationary exercise bike, the imagery displayed through the 3-dimensional or holographic display can respond to simulate motion through virtual space.


The 3D or holographic display 102 can be any existing display capable of providing content in a way that provides the user an immersive experience while using the stationary bike 100 without requiring the user to wear any virtual reality (VR) headset. For example, the 3-dimensional or holographic display 102 may utilize a lenticular display paired with the face-tracking camera 103 to detect the user's eye position. The lenticular display can rapidly output image frames directed alternately at the user's right and left eye to simulate stereoscopic depth perception. The imagery adapts to the position of the user's eyes to simulate visual parallax. Alternatively, the 3-dimensional or holographic display 102 can be a light-field display, which directs photons along the proper vector to simulate their trajectory from a virtual scene and create the perception of depth. Lastly, holographic projectors provide another alternate method of simulating depth perception by tracking the user's eye position with the camera 103 and then projecting different images into each of the user's eyes to create a stereoscopic view of a virtual scene. The embodiments of the disclosure can use any of these methods or a combination of them.
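As a non-limiting sketch of how the tracked head position could drive the two per-eye viewpoints of such a display (the interpupillary distance, coordinate conventions, and function names are illustrative assumptions):

```python
def eye_positions(head_xyz, ipd_m=0.063):
    """Split a tracked head position (meters, display-centered axes)
    into left- and right-eye positions for rendering the two
    stereoscopic views, separated horizontally by the interpupillary
    distance (IPD)."""
    x, y, z = head_xyz
    half = ipd_m / 2.0
    return (x - half, y, z), (x + half, y, z)

def parallax_shift_m(head_x_m, object_depth_m, screen_depth_m=1.0):
    """Approximate horizontal on-screen shift of a virtual object at
    object_depth_m when the head moves laterally by head_x_m: objects
    at the screen plane do not shift, while more distant objects
    shift progressively more, producing head-motion parallax."""
    return head_x_m * (1.0 - screen_depth_m / object_depth_m)
```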


The virtual content output through the 3-dimensional or holographic display 102 can respond to the sensor data input from the one or more sensors 101 of the stationary exercise bike 100. Conversely, the stationary exercise bike can adapt to the virtual content. For example, to simulate a virtual hill, the resistance of the pedals of a stationary bicycle might increase to give the user a sense of pedaling up an actual hill. Additionally, the virtual content can adapt and respond to the positional-tracking cameras 103 to better simulate depth perception.


The face-tracking cameras 103 might also be used to better understand how the user is perceptually, emotionally, physiologically and psychologically experiencing their exercise. The virtual content can adapt to this information to optimize and customize the exercise experience for each individual user. For example, the camera(s) 103 might detect infrared light output from the user's body to infer heart-rate and blood-flow and adjust the exercise intensity to maintain a constant, optimum heart-rate. Additionally or alternatively, the exercise equipment might use a combination of cameras 103 with one or more electrodermal sensors 112 (positioned in the handlebars, for example) to detect perspiration and infer hydration levels and then prompt the user to drink liquid when needed. An array of sensors including cameras (e.g., camera 103), cadence and resistance sensors and electrodermal sensors (e.g., sensor 112) might track and interpret subtle variations in perspiration, heart-rate, exertion, eye movement, facial expression, exercise technique, etc. to infer when the user is experiencing a peak rush of euphoria (known as “runner's high”) while exercising and synchronize visual and audio content to enhance the euphoria, for example, by displaying more exciting imagery and louder music that match the rhythm of the user's heart-rate or exercise cadence.


The audio content can be stored locally in a storage of the stationary exercise bike 100 or streamed from a remote source (e.g., a cloud server). The audio content can be synchronized with the visual content by the processor 104 in response to data captured by the one or more sensors 101, 107, 109, 110, 112, and camera(s) 103. One or more speakers 120 located at different locations on the stationary exercise bike 100 can output the audio content with the intended effects and/or volume.
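The constant-heart-rate adjustment described above can be sketched as a simple proportional controller; the gain, target, and resistance bounds below are illustrative assumptions rather than disclosed values:

```python
def adjust_resistance(current_resistance, heart_rate_bpm,
                      target_bpm=140.0, gain=0.05,
                      min_level=1.0, max_level=20.0):
    """Proportional controller: nudge the bike's resistance so the
    user's inferred heart rate converges toward the target. Below
    target, resistance rises; above target, it falls; the result is
    clamped to the machine's resistance range."""
    error = target_bpm - heart_rate_bpm
    proposed = current_resistance + gain * error
    return max(min_level, min(max_level, proposed))
```

Called once per sensor update, this would hold the user near the target heart rate; a production controller would likely add integral/derivative terms and rate limiting for comfort.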


Although the 3-dimensional or holographic display 102 is shown to have a flat display surface, it should be understood that the display 102 can have a surface of any type and any curvature. It should also be understood that the 3-dimensional or holographic display 102 can include multiple screens that combine to create the immersive visual experience for the user. It should also be understood that the display 102 can be of any size and shape.


As illustrated in FIGS. 12a and 12b, the 3-dimensional or holographic display does not need to be physically attached to or integrated into the exercise apparatus. In some embodiments, as shown in FIGS. 12a and 12b, the 3-dimensional or holographic display can be connected to the exercise apparatus via a wired or wireless connection. This allows existing exercise apparatuses to be retrofitted with a 3-dimensional or holographic display to provide the same immersive experience for the user.


Referring back to FIG. 1, the computer processor 104 can be any computer processor capable of processing signals from the camera 103 and the one or more sensors 101, 107, 109 of the stationary bike 100. Although the processor 104 is illustrated as being located in a housing behind the 3-dimensional or holographic display 102 in FIG. 1, it should be understood that the processor can be located at any part of the stationary bike 100. For example, the processor might be positioned near the flywheel to leverage airflow for enhanced thermal dissipation, accommodating the intense heat resulting from the heavy graphical processing required to display holographic content.


Content to be shown on the 3D or holographic display 102 and software or firmware updates can be streamed or downloaded over the internet. For example, the user can select a stage of the Tour de France to be rendered by the 3D or holographic display 102 while using the bike 100 to simulate competing in the race. Specifically, the 3D or holographic display 102 can display pre-existing (e.g., downloaded) continuous footage of a Tour de France race captured using an omnidirectional, stereoscopic camera on a vehicle (e.g., a bicycle or a car). The footage can be shown at a pace that corresponds to the user's pace on the stationary bike 100 and from different angles that change in response to the user's eye movements captured by the face tracking camera 103.
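One hedged sketch of pacing such pre-recorded footage to the rider's own speed (the capture speed and rate cap below are assumed values, not taken from the disclosure):

```python
def playback_rate(user_speed_mps, capture_speed_mps=8.0, max_rate=3.0):
    """Scale video playback so the apparent motion of the recorded
    course matches the user's simulated speed: footage captured at
    capture_speed_mps plays at 1.0x when the user matches that speed,
    clamped to [0, max_rate] to avoid unwatchable extremes."""
    rate = user_speed_mps / capture_speed_mps
    return min(max(rate, 0.0), max_rate)
```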



FIG. 2 illustrates a treadmill 200 capable of providing virtual and immersive content for the user, according to another embodiment of the disclosure. Similar to the stationary bike of FIG. 1, the treadmill 200 can also include one or more sensors 202, 204, a 3-dimensional or holographic display 206, a face-tracking camera 208 located at the top of the 3-dimensional or holographic display 206, and a computer processor 210 in communication with the sensors 202, 204, the 3-dimensional or holographic display 206, and the camera 208.


Sensor 202 can be a sensor placed under the belt of the treadmill to detect the force, timing, and/or location of the contact made by the user's feet. Sensor 202 can be multiple sensors placed at different locations under the belt. Additionally or alternatively, sensors 204 can be placed on the handlebars of the treadmill to detect any force from the user gripping the handlebars. Data detected by the sensors 202, 204 can be transmitted to the computer processor 210 of the treadmill 200.


The 3-dimensional or holographic display 206 and the face-tracking camera 208 of the treadmill 200 can be similar to the 3-dimensional or holographic display 102 and face-tracking camera 103 of the stationary bike 100 of FIG. 1, respectively. Data captured from the camera 208 can be transmitted to the processor 210.


The processor 210 can process the data received from the sensors 202, 204 and the camera 208, along with the settings (e.g., degree of incline, speed setting) of the treadmill, to determine the user's pace, lateral movement, head/eye movement, etc. when the user is using the treadmill 200. The processor 210 can then display on the display 206 3-dimensional or holographic imagery that simulates an immersive visual experience (e.g., running through a forest or on a race track against other runners) for the user. The imagery can be video streamed in real time or content pre-downloaded from a remote server such as a cloud server.
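As an illustrative sketch, the processor's estimate of running cadence from the timestamps of foot strikes detected by the under-belt sensors 202 might be computed as follows (the function name and sampling scheme are assumptions):

```python
def strides_per_minute(strike_times_s):
    """Estimate running cadence from the timestamps (in seconds) of
    successive foot strikes detected by the under-belt force sensors."""
    if len(strike_times_s) < 2:
        return 0.0  # not enough strikes to measure an interval
    span_s = strike_times_s[-1] - strike_times_s[0]
    return (len(strike_times_s) - 1) * 60.0 / span_s
```

The resulting cadence, combined with the belt speed setting, could drive the pace at which the virtual forest or race track scrolls past.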


The treadmill 200 can optionally include additional sensors not shown in FIG. 2 and speakers for outputting audio content synchronized with the visual content displayed on the display 206 based on data captured by the sensors.


It should be understood that both the stationary exercise bike 100 of FIG. 1 and the treadmill 200 of FIG. 2 are examples of the present disclosure. In other embodiments, other types of exercise equipment such as rowing machines, ellipticals, ski exercise machines, and boxing machines can also incorporate sensors, face-tracking cameras, and 3-dimensional or holographic displays in a similar fashion as described above to provide a similar immersive experience for the users.



FIG. 3 illustrates the exemplary system components of an exercise apparatus 600 such as the stationary bike 100 of FIG. 1 or the treadmill 200 of FIG. 2, according to an embodiment of the disclosure. The system components can include a memory 602, a processor 604, a storage 606, an input/output (I/O) interface 608, and a communication interface 610.


Processor 604 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 604 may be configured to receive data and/or signals from sensors, cameras, other types of user interfaces such as a keypad or a touch screen, and/or other devices on the network, and to process the user input and the received data and/or signals to determine the settings of the exercise apparatus, including what content is to be provided via the 3-dimensional or holographic display and how that content is provided.


Processor 604 may execute computer instructions (program codes) stored in memory 602 and/or storage 606, and may perform functions in accordance with exemplary techniques described in this disclosure. Memory 602 and storage 606 may include any appropriate type of mass storage provided to store any type of information that processor 604 may need to operate. Memory 602 and storage 606 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 602 and/or storage 606 may be configured to store one or more computer programs that may be executed by processor 604 to perform exemplary functions disclosed in this disclosure including generating content for displaying on the 3-dimensional or holographic display of the exercise apparatus. For example, memory 602 and/or storage 606 may be configured to store program(s) that may be executed by processor 604 to determine the speed at which the content is being played on the display based on the pace of the user running on the treadmill. The program(s) may also be executed by processor 604 to provide an interface for interacting with a user.


Memory 602 and/or storage 606 may be further configured to store information and data used by processor 604. Memory 602 and/or storage 606 may be configured to store real-time streaming or pre-downloaded video content and/or software updates to the exercise machine.


Referring again to FIG. 3, communication interface 610 may be configured to facilitate communication between the exercise apparatus 600 and other devices on the network. The communication interface 610 can be configured to transmit and receive signals/data via wireless network connections. For example, the exercise apparatus 600 can communicate via the communication interface 610 with a remote server to download content or updates to the various programs running on the exercise apparatus 600. Other known communication methods that provide a medium for transmitting data are also contemplated. The communication interface 610 can additionally support wired communications.


I/O interface 608 can allow the exercise apparatus 600 to interact with a user. For example, the I/O interface 608 can be a touch screen that displays an interactive screen for the programs (or apps) running on the exercise apparatus 600. The touch screen can also receive touch or gesture input from a user. Any other conventional I/O interface can also be incorporated into the apparatus.


In another aspect of the disclosure, a method of providing an immersive experience to a user of an exercise machine is provided. The method can be performed by software, firmware, hardware, and/or a combination thereof. The software and firmware can be stored in a local storage or hosted on a remote server connected to the exercise equipment.



FIG. 4 illustrates the exemplary steps in the method of providing an immersive experience to the user of exercise equipment. First, the processor of the exercise equipment receives user input to start an exercise session (step 401). The user input may include specific settings of the equipment (e.g., speed of a treadmill, time duration for the session) and a selection of a particular virtual environment to be shown on the display of the exercise equipment. The virtual environment can simulate a real race (e.g., the Tour de France race), a particular environment (e.g., beach, forest, or gym), or a computer-generated virtual environment such as those seen in computer games.


Next, the processor initiates the exercise machine based on the user inputs (step 402). As the user starts exercising on the exercise machine, the processor can receive real-time data from the one or more sensors capturing the movement, force, and other data associated with the user's action (step 403). Similarly, the processor can also receive information from the face-tracking camera that tracks, for example, the user's eye movement (step 404). The processor can analyze the sensor data in combination with the camera data to display or simulate a virtual environment via the 3D or holographic display of the exercise machine for the user (step 405). The processor can optionally continue to make real-time adjustments to the virtual environment based on the data received from the sensors and the camera (step 406). Additionally or alternatively, the processor can generate feedback (e.g., changing the incline on the treadmill) by adjusting the settings of the exercise equipment based on the virtual environment being shown to the user (step 407). When the user finishes his/her exercise, the processor can turn off the display (step 408).
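The steps above can be sketched as a control loop; the callable parameters below are hypothetical stand-ins for the machine's subsystems, not a disclosed interface:

```python
def run_session(read_input, init_machine, read_sensors, track_eyes,
                render, apply_feedback, finished, power_off):
    """Illustrative control loop spanning steps 401-408: start the
    session, then repeatedly fold sensor and camera data into the
    displayed virtual environment until the session ends.
    Returns the number of rendered frames."""
    settings = read_input()                  # step 401: user input
    init_machine(settings)                   # step 402: initialize machine
    frames = 0
    while not finished():
        sensor_data = read_sensors()         # step 403: force/motion data
        gaze = track_eyes()                  # step 404: eye position
        render(settings, sensor_data, gaze)  # steps 405-406: draw/adjust scene
        apply_feedback(sensor_data)          # step 407: e.g., change incline
        frames += 1
    power_off()                              # step 408: turn off display
    return frames
```

Passing the subsystems in as callables keeps the sketch testable; a real implementation would likely run rendering and sensor polling on separate threads at different rates.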


In yet another aspect of the disclosure, embodiments of exercise equipment can also function as a software content delivery system. The software content (e.g. software applications) delivered through the exercise equipment can utilize the sensors, displays, cameras and exercise equipment in different ways to create different exercise experiences. For example, one software application might virtually simulate cycling the Tour de France with other live participants connected via the internet. Another application might simulate running through the woods. Any application can leverage the array of display(s), sensors, and exercise equipment in different ways to create unique experiences for the user.


In another aspect of the disclosure, an exercise apparatus with an integrated performance verification system is disclosed.


Current athletic esports platforms such as virtual cycling and rowing races are vulnerable to cheating, especially when participants connect remotely, and it is not possible to verify their physical performance through direct human observation. Dishonest participants can cheat using methods such as interfering with exertion sensors in exercise equipment, attaching external motors that turn gears of exercise equipment to simulate athletic exertion, and hacking various layers of the software systems that esports platforms rely on. High numbers of remote participants in massively multiplayer environments can further exacerbate the challenge of verifying individual performance. As well, some exercise equipment categories like treadmills do not have integrated mechanisms that respond to physical exertion, such as cadence and wattage sensors common in cycling and rowing, instead estimating athletic output from the speed of the motorized tread, presenting additional challenges with performance verification. As a result, much performance data goes unverified or not thoroughly verified, particularly in casual multiplayer settings, allowing dishonest participants to outpace other players, score points, earn rewards and even qualify for competitive events.


Embodiments of the disclosure utilize computer vision systems integrated with exercise equipment alongside common exercise exertion sensors like cadence and wattage gauges to verify that the recorded physical exertion output aligns with biometric signals simultaneously registered by the computer vision system. The camera sensors and computer vision systems used to measure biometric signals for the purposes of performance verification and cheating prevention can be the same as or used in concert with those used in a stereoscopic display attached to the exercise equipment such as the stationary bike 100 of FIG. 1 and the treadmill 200 of FIG. 2.



FIG. 5 illustrates the exemplary components of a system for verifying performance of a user of an exercise apparatus 500 such as the stationary exercise bike 100 of FIG. 1 and the treadmill 200 of FIG. 2. The exercise apparatus 500 can include a computer vision system 504 consisting of one or more camera sensors 505 and the associated image recognition software. The computer vision system 504 can measure biometric signals captured by the one or more camera sensors 505 from the user of the exercise apparatus 500. In one embodiment, the computer vision system 504 can include one or more functions such as face and body feature detection, motion detection, time-of-flight sensing and stereoscopic triangulation to perceive depth, and skin light emission detection. In some embodiments, multiple different computer vision systems can be used separately or together to measure the various biometric signals. In some embodiments, relevant biometric signals that can be tracked by the computer vision system(s) 504 include one or more of body motion patterns, heart-rate, cardiovascular circulation, perspiration detection, blood-oxygen levels, and more.


For example, one approach to verify whether a person is actually exerting the effort that the ergometer is reading would be to analyze camera footage of the person while they use exercise equipment like a rowing machine and to determine whether their body motion during each rowing stroke aligns with the body motion expected to exert the force reported by the ergometer during that stroke. The expected body motion in this case would be predetermined by analyzing a large sampling of camera footage of people performing similar exercise. Another approach to measure other biometric signals such as heart-rate, circulation and blood-oxygen saturation involves optical sensors (i.e. cameras) and illuminators using photoplethysmography. These optical sensors can be placed on the exercise apparatus so as to allow for close skin contact, such as on the handlebars. Alternatively, the camera sensor used can be the same as the one used to detect body motion and would be placed further from the user for non-contact photoplethysmography.
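By way of a non-limiting illustration, the non-contact photoplethysmography approach described above can be sketched as follows. The snippet estimates heart rate from a time series of per-frame mean green-channel intensities sampled over a skin region; the function name, frequency band, and sampling details are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def estimate_heart_rate_bpm(green_means, fps):
    """Estimate heart rate from a time series of mean green-channel
    intensities sampled from a facial skin region (remote PPG sketch).

    green_means: 1-D array of per-frame mean green values
    fps: camera frame rate in frames per second
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human heart-rate band (0.7-4.0 Hz ~ 42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                  # Hz -> beats per minute
```

The dominant spectral peak within a plausible heart-rate band is taken as the pulse; a production system would add band-pass filtering, motion compensation, and skin-region tracking.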


The exercise apparatus 500 can optionally include additional sensors such as exercise exertion sensors like cadence sensor 506 and wattage gauge 508. Other types of sensors such as pressure sensors, rotation sensors, position sensors, and vibration sensors (not shown in FIG. 5) can also be incorporated into the exercise apparatus as described above with reference to FIGS. 1 and 2. Data captured by these additional sensors (e.g., cadence sensor 506 and wattage gauge 508) can supplement the data (e.g., biometric signals) captured by the camera sensors 505 for use in verifying the performance of the user of the exercise apparatus 500.


The exercise apparatus can also include a processor 510 in communication with the computer vision system 504 and the optional additional sensors 506, 508. The processor 510 can receive biometric signals and other data from the computer vision system 504. The processor 510 can also receive signals from the additional sensors 506, 508. The processor 510 can verify whether the physical performance level indicated by one or more of the biometric signals aligns with the physical performance level simultaneously recorded by the other sensors (e.g., exertion sensors 506, 508) of the exercise apparatus 500.


For example, the processor can determine whether the body motion patterns and the amount of perspiration detected are consistent with the same performance level (e.g., pedaling/running speed). If both the body motion patterns captured by the computer vision system 504 and the signals received from the exercise exertion sensors 506, 508 indicate that the user is pedaling or running at about the same speed, the user's performance is verified. In contrast, if the camera data (e.g., body motion patterns) shows that the user is running at a slower pace than the data from the exercise exertion sensors 506, 508 indicates, the processor 510 can determine that the data from at least some of the exercise exertion sensors 506, 508 may not be accurate in representing the user's actual performance on the exercise apparatus. This can provide a mechanism to verify user performance in an esport competition taking place remotely on participants' own exercise apparatus.
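As a minimal sketch of the comparison described above, the check might look like the following, assuming a 15% relative tolerance chosen purely for illustration (the disclosure does not specify thresholds):

```python
def performance_verified(camera_speed, sensor_speed, tolerance=0.15):
    """Return True if the speed inferred from camera body-motion analysis
    agrees with the speed reported by the exertion sensors to within a
    relative tolerance (15% here, an illustrative threshold)."""
    if sensor_speed == 0:
        return camera_speed == 0
    return abs(camera_speed - sensor_speed) / sensor_speed <= tolerance
```

A camera-derived pace well below the sensor-reported pace would fail this check and could trigger the mechanisms described below.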


In some embodiments, the processor 510 may not be on the exercise apparatus 500 but instead be on a remote computer (e.g., a cloud processor) that is connected to the exercise apparatus via a network such as the Internet. Sensor data from the computer vision system 504 and the other sensors 506, 508 can be transmitted via the network to the remote computer for processing.


In some embodiments, when multiple biometric signals are measured together by the computer vision system 504, the processor can cross-check them against each other as well as against the output of the exertion sensor(s), creating a stronger verification system that is more difficult to dupe. For example, in the case of an activity like rowing where stroke form contributes to overall output, analyzing body motion can help verify that the user's stroke rate and technique match the exertion readout from the ergometer. However, body motion analysis alone may not be sufficient to prevent cheating. Analyzing heart-rate alongside body motion can further verify that the user's cadence output aligns with the ergometer readout, helping to ensure they are not artificially amplifying their stroke force. Together, these separate methods can be more effective than the sum of their parts, as it becomes much more difficult to dupe multiple systems simultaneously.
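A hedged sketch of this multi-signal cross-check, with the signal names and tolerance invented for illustration, might require every camera-derived estimate to agree with the exertion-sensor readout:

```python
def cross_check(signals, sensor_value, tolerance=0.15):
    """Cross-check several independently derived performance estimates
    (e.g., from body motion, heart-rate, skin light absorption) against
    the exertion-sensor readout. All estimates must agree for the
    performance to be verified, making the system harder to dupe.

    signals: dict mapping signal name -> estimated performance level
    sensor_value: performance level reported by the exertion sensor
    """
    failed = [name for name, value in signals.items()
              if abs(value - sensor_value) > tolerance * sensor_value]
    return (len(failed) == 0, failed)   # (verified?, disagreeing signals)
```

Returning the list of disagreeing signals lets downstream logic flag which channel is suspect rather than simply rejecting the session.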


In some embodiments, it is also possible for the processor 510 to bypass the process of inferring biometric measures like heart-rate altogether and instead only analyze variations in the raw biometric signals emitted by the user as detected by the camera sensors. For example, using this method to correlate exertion with skin light absorption is more direct than correlating exertion with the inferred heart-rate, which is an extrapolation and abstraction of the observed biometric signal of skin light absorption. Such systems can be facilitated by neural network models trained on data sets collected from numerous users in a variety of settings and conditions.


In some contexts, such as casual non-competitive exercise, simpler computer vision systems that track fewer biometric signals might be sufficient to verify performance whereas other more stringent use-cases such as esports competitions might require more robust verification that relies on a variety of biometric signals and computer vision systems. In some embodiments, the exercise apparatus can have different performance verification settings that can be set either by the user or by a remote computer based on the context of an exercise session. For example, for a formal esports competition, each of the participating exercise apparatus can be connected to a central computer over a network. The central computer can set the performance verification setting on each exercise apparatus to the highest level, enabling all verification mechanisms available on the apparatus. The central computer can also lock the setting so the user cannot override it when in competition. In contrast, for a casual training session, the performance verification setting can be turned off.


In some embodiments, analyzing data of simultaneous output from the exertion sensor and optical sensors (e.g., cameras) from numerous different users and their exercise sessions can establish a baseline correlation that can then be used to verify an individual user's physical performance. The aggregate session data can also be compared to details about a particular user such as height, weight, and age as well as data collected from previous exercise sessions to establish an individual baseline specific to that user.



FIG. 6 illustrates a network of exercise apparatus 602, 604, 606 connected to a central computer 600 over a network 608. Each of the exercise apparatus 602, 604, 606 can transmit user profile and performance data to the central computer 600. In this embodiment, the central computer 600 can analyze and establish a performance baseline for each user. The performance baseline can be compared to the user's performance during an exercise session to detect any suspicious performance level (e.g., the user is riding at a 30% faster pace than his performance baseline). Similarly, the central computer 600 can aggregate session data from multiple users having similar user profiles (e.g., age, height, weight) and establish a common performance baseline for the group of users and use the common performance baseline in performance verification during competition.
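The baseline comparison described above (e.g., flagging a pace 30% above the user's established baseline) might be sketched as follows; the threshold and function name are illustrative assumptions, not part of the disclosure:

```python
import statistics

def flag_suspicious(session_speeds, baseline_speeds, threshold=1.3):
    """Flag a session whose average pace exceeds the user's established
    baseline by more than a given ratio (1.3 = 30% faster, matching the
    example above). Threshold is illustrative."""
    baseline = statistics.mean(baseline_speeds)
    current = statistics.mean(session_speeds)
    return current > threshold * baseline
```

The same comparison could be run against a common baseline aggregated from users with similar profiles, as described for the central computer 600.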


Referring again to FIG. 5, in addition to affirming the veracity of readings from the primary exertion sensor on an exercise apparatus, the computer vision system 504 of FIG. 5 can also be used to detect anomalous activity that appears suspicious or potentially violates an esports platform's rules. Skin light emission, body motion, or other biometric signals that deviate too much from normal behavior can be flagged as suspicious even if they do not definitively indicate cheating. For example, if the computer vision systems 504 (e.g., the camera) detect multiple people in proximity to the exercise equipment for the duration of a ride, it might flag this behavior as suspicious.



FIG. 7 is a flow chart illustrating the exemplary steps in a method of verifying performance of a user of an exercise apparatus. First, image data of the user on the exercise apparatus are captured by camera sensor(s) during an exercise session. (Step 701) Biometric signals from the image data are extracted by the computer vision system. (Step 702) The biometric signals can include one or more of body motion patterns, heart-rate, cardiovascular circulation, perspiration detection, blood oxygen levels, and more. The biometric signals are then analyzed by the computer vision system to determine a first set of user performance data. (Step 703) The performance level can, for example, be the speed at which the user is running or cycling. In some embodiments, the process of inferring some biometric measures like heart rate can be bypassed altogether in this step and instead only variations in the raw biometric signals emitted by the user as detected by the camera sensors are analyzed. Optionally, the user performance data extracted from different biometric signals can be cross checked to ensure there is no anomaly in the data. (Step 704)


Data from one or more other sensors (e.g., exercise exertion sensors) on the exercise apparatus can be processed by a processor to determine a second set of user performance data. (Step 705) The first and second sets of performance data are synchronized and compared to determine if there are significant inconsistencies between the data sets. (Step 706) Any significant inconsistency can trigger an alert about that user's performance (Step 707) that can be handled by the system differently depending on the context and programming. For example, in the context of a competitive race, the alert can be redirected to a supervising authority such as a human referee that can review and determine if there has been a rule violation based on the evidence provided by the system. It should be understood that some of the steps illustrated in FIG. 7 can be performed simultaneously or in a different order than illustrated.


In yet another aspect of the disclosure, systems and methods of enhancing the perception of motion in athletic simulations are disclosed. As discussed in the embodiments above, digital simulations of cardiovascular sport activities such as cycling, running, and rowing can help alleviate the monotony of indoor stationary exercise by visualizing the user moving through virtual environments on screens attached to exercise equipment such as stationary bikes, treadmills, and rowing machines. Conveying a convincing sense of speed and motion on-screen is critical to simulating the enjoyable experience of training in the real-world.


However, movement through virtual worlds displayed on a conventional screen can feel slower than it would in the real-world even when the simulated pace in the virtual scene accurately matches what would be experienced in the real-world based on the output exertion of the user. For example, cycling at a rate of 20 miles per hour is a rigorous pace and feels very fast in the real-world but simulating that pace of movement through virtual space on a conventional screen can feel much slower. This can make exercising with these simulators less engaging. Several factors contribute to this difference in the perception of speed. The real-world visual sense of speed is influenced by the wider field-of-view, peripheral vision, and stereoscopic depth perception of the person in motion relative to their environment.


While head-mounted displays or immersive dome displays can create an accurate visual sense of speed by providing a wider field-of-view and peripheral vision, these methods have an increased risk of inducing nausea and motion sickness. As will be discussed in detail below, one method to alleviate this is to artificially restrict the field of view when the user is moving, even slowly, creating a tunnel-vision effect, and to simulate an amplified sense of motion within the more limited field of view.


In some embodiments of the disclosure, systems of enhancing the perception of motion in athletic simulations (e.g., using stationary exercise apparatuses) are disclosed. To enhance the perception of motion, the systems can produce responsive visual effects applied to the viewport of a stereoscopic display attached to exercise equipment such as stationary bikes, treadmills, and rowing machines. The motion perception amplification results from two visual effect techniques, the viewport refraction system and viewport particle system, that each can dynamically respond to the user's physical exertion and body movement measured by sensors on the exercise equipment.


Viewport Refraction System

The viewport refraction system enhances the perception of motion by exaggerating the field of view rendered by the stereoscopic display. This expanded view allows users to see more of the objects and terrain close around them, such as the ground just in front of them or the trees along their periphery. While the user moves through the virtual environment, these nearfield visual markers appear to move past the user faster than objects far away. By making nearby objects and terrain more visible to the user, the viewport refraction effect can accentuate the user's sense that they are moving fast through virtual space.


In one variation, the intensity of the refraction effect responds to the velocity of the user in virtual space and their exerted effort as recorded by the ergometer on the exercise equipment so that when they exert more effort and move faster through virtual space, they see a wider field of view. This makes changes in virtual velocity even more pronounced and makes exerted effort feel more gratifying.



FIG. 8a illustrates the exemplary components of a system for generating enhanced graphics for providing an immersive experience while using an exercise apparatus such as the stationary bike 100 of FIG. 1 and the treadmill 200 of FIG. 2. The system 800 can include a signal receiving module 802 in communication with one or more sensors 803, 804 of the exercise apparatus. The one or more sensors can include, for example, an ergometer for capturing an exerted effort of the user of the exercise apparatus. The signal receiving module 802 can receive signals from the one or more sensors 803, 804. For example, the signal receiving module 802 can receive from the ergometer a signal comprising information on the exerted effort of the user.


The system can include a velocity determining module 806 in communication with the signal receiving module 802. The signal receiving module 802 can pass the information in the signals from the one or more sensors 803, 804 to the velocity determining module 806. The velocity determining module 806 can determine based on the information (e.g., force captured by the ergometer) a velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus.


The system can further include a graphic enhancing module 808 in communication with the velocity determining module 806. The graphic enhancing module 808 can receive the velocity from the velocity determining module 806 and modify the graphics based on the velocity. In one embodiment, the graphic enhancing module 808 can add the refraction effect discussed above. That is, while the user moves through the virtual environment as he or she engages in an exercise on the exercise apparatus (e.g., the stationary bike 100 of FIG. 1 and the treadmill 200 of FIG. 2), the nearfield visual markers in the graphics being displayed appear to move past the user faster than objects far away. The graphic enhancing module 808 can adjust the intensity of the refraction effect in response to the velocity of the user in virtual space as determined from the signals received from the sensors on the exercise apparatus. For example, when the user's exerted effort as recorded by the ergometer increases, the intensity of the refraction effect can also increase to provide more realistic graphic effects for the user.


One such effect can be a wider field of view provided on the display. To achieve this expanded field of view in the virtual viewport of the stereoscopic display, the graphic enhancing module 808 can warp the left and right eye images separately according to the same curvature, resulting in a visual effect similar to the physical phenomenon of a convex lens refracting light so that the resulting image seen through the lens has a wider field of view. Additionally, the graphic enhancing module 808 can be programmed with the capability to differentiate between components of the virtual scene that should be warped or unwarped. For example, graphic interface elements, text, particles and certain objects might not be intended to be warped, whereas the landscape and terrain must be warped to achieve the expanded field of view. One method to achieve this uses multi-pass rendering to differentiate between components of the virtual scene that should appear warped or unwarped. The warped and unwarped passes are rendered separately and then composited together by the graphic enhancing module 808. Another method might use a single render pass for all components in a scene and differentiate between warped and unwarped objects depending on their placement in the virtual scene. For example, objects that appear in front of the stereoscopic display panel might be unwarped, whereas those that appear behind the display panel are warped.
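One possible sketch of such a per-eye radial warp, assuming normalized viewport coordinates and an intensity that ramps with velocity (all constants and names are illustrative assumptions, not taken from the disclosure):

```python
import numpy as np

def refract_coords(x, y, intensity):
    """Map normalized viewport coordinates (x, y in [-1, 1]) through a
    radial 'convex lens' style warp. Larger intensity samples the scene
    farther out than the on-screen radius, so a wider slice of the scene
    fits in the same viewport."""
    r = np.sqrt(x**2 + y**2)
    scale = 1.0 + intensity * r**2   # distortion grows toward the periphery
    return x * scale, y * scale

def warp_stereo(left_uv, right_uv, velocity, max_intensity=0.35):
    """Warp the left- and right-eye sample grids with the same curvature,
    ramping intensity with the user's virtual velocity (normalized 0-1).
    max_intensity is an illustrative constant."""
    intensity = max_intensity * np.clip(velocity, 0.0, 1.0)
    return (refract_coords(*left_uv, intensity),
            refract_coords(*right_uv, intensity))
```

At zero velocity the mapping is the identity, and the warp strengthens smoothly as exertion drives the virtual velocity up; interface elements would simply bypass this mapping, consistent with the multi-pass approach described above.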


In addition to expanding the field of view, the graphic enhancing module 808 can also amplify the perception of depth to enhance the sense of speed depending on how the refraction distortion is applied across the rendered image.


Referring back to FIG. 8a, the graphic enhancing module 808 can be in communication with a display 810. The display 810 can, for example, be the 3D or holographic displays 102, 104 of FIGS. 1 and 2, respectively. The graphic enhancing module 808 can transmit the enhanced graphics to the display 810 to be displayed in real time as the user exercises on the exercise apparatus.



FIG. 8b illustrates the exemplary steps in a method of generating enhanced graphics to be displayed in real time as the user exercises on an exercise apparatus, according to an embodiment of the disclosure. First, an exerted effort of the user of the exercise apparatus is captured. (Step 850) A velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus can be determined based on the user's effort on the exercise apparatus. (Step 851) The graphics to be displayed to the user can be modified to produce a refraction effect (or other effects) by a graphic enhancing module based on the velocity. (Step 852) Optionally, the intensity of the effect (e.g., refraction effect) can be adjusted by the graphic enhancing module in response to the velocity of the user in virtual space as determined from the signals received from the sensors on the exercise apparatus. (Step 853) The enhanced/adjusted graphics can be transmitted to a display to be displayed in real time as the user exercises on the exercise apparatus. (Step 854) It should be understood that some of the steps in the method of FIG. 8b can be performed simultaneously or in a different order.


Viewport Particles

In some embodiments, zoom particles are another effect that can enhance the perception of motion by populating the user's viewport with small moving points, streaks and artifacts that respond to the user's physical exertion and virtual velocity. The particle trajectories flow nearby and through the viewport in virtual space so that the particles appear to move past the user faster than objects and terrain that are farther away, creating an even more prominent motion parallax effect. These particles originate in front of the viewport and move toward the viewport along the user's motion trajectory in virtual space. The particle trajectories can respond to information from the eye, head and body tracking camera sensors attached to the exercise equipment, such that the particles move closely by the user's eyes, head and body without colliding. When used with a stereoscopic display, this is critical to ensure the particles avoid intersecting with the user's eyes and head, which can induce an uncomfortable sensation that causes the user to flinch. This enables the particles to appear as close as possible to the user, maximizing the perception of speed without causing discomfort.



FIG. 9 provides a 2-dimensional illustration of the zoom effect on a 3D or holographic display. Particles (collectively 900) of the same or various sizes can be projected to “zoom” in predetermined trajectories to enhance the 3D or holographic effects of the virtual environment being displayed.


According to the embodiments, the intensity of the zoom particles is determined by several parameters that can respond dynamically to the readings from the ergometer and other sensors on the exercise equipment as well as other isolated events and factors in the virtual environment to create engaging visual feedback that further accentuates the user's perception of motion. These parameters can include but are not limited to the number of particles, their spawn rate, the duration that they remain visible, as well as their trajectory, velocity, speed, color, opacity, etc. Together these parameters determine the overall intensity of the zoom particle effect. Each can respond differently to user input and environmental factors. For example, particle velocity might correspond to the simulated velocity of the user while other parameters like particle size, color and opacity might be determined more by the current wattage or cadence readout from the ergometer. Because virtual velocity is partly determined by environmental factors such as virtual inclination, the wattage and cadence readouts are more direct measures of exerted effort, resulting in visual feedback from particles that feels more responsive to user input. Combining parameters that respond to velocity with those that respond to wattage, cadence and other factors can create a balanced effect that is optimized to enhance both responsiveness to input and the perception of speed.


These zoom particles are especially effective at enhancing the perception of motion because their velocity and intensity do not have to be physically accurate and can be independent from the movement of the rest of the virtual environment relative to the user. While it is beneficial for the purposes of accurate simulation that the user moves through virtual space at a pace that aligns with real-world physics based on their exertion, even though this can feel slow, particles can be optimized to maximize the enjoyment and gratification of exercise regardless of virtual physics. For example, if the user is moving through the virtual world at 15 miles per hour based on their exertion, the particles can move by the user at a rate of 45 miles per hour to artificially amplify the user's perception of speed.
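A minimal sketch of how zoom-particle parameters might be derived from velocity, wattage, and cadence, using the 15-to-45 mph amplification from the example above; the gains, base values, and function name are illustrative assumptions:

```python
def particle_parameters(virtual_velocity, wattage, cadence,
                        base_count=50, speed_gain=3.0):
    """Derive illustrative zoom-particle parameters. Particle speed tracks
    the simulated velocity (amplified, since it need not be physically
    accurate), while spawn rate and opacity track wattage and cadence,
    which respond more directly to exerted effort."""
    return {
        "speed": virtual_velocity * speed_gain,       # e.g. 15 mph -> 45 mph
        "spawn_rate": base_count * (1.0 + wattage / 250.0),
        "opacity": min(1.0, 0.3 + cadence / 120.0),   # clamp to fully opaque
    }
```

Splitting parameters between velocity-driven and effort-driven inputs reflects the balanced effect described above: the particle speed conveys motion, while the spawn rate and opacity give immediate feedback on exertion.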



FIG. 10a illustrates the exemplary components of a system for generating enhanced graphics for providing an immersive experience while using an exercise apparatus such as the stationary bike 100 of FIG. 1 and the treadmill 200 of FIG. 2. The system 1000 can include a signal receiving module 1002 in communication with one or more sensors 1003, 1004 of the exercise apparatus. The one or more sensors 1003, 1004 can include, for example, an ergometer for capturing an exerted effort of the user of the exercise apparatus. The signal receiving module 1002 can receive signals from the one or more sensors 1003, 1004. For example, the signal receiving module 1002 can receive from the ergometer a signal comprising information on the exerted effort of the user. In one embodiment, the information can include the current wattage or cadence readout from the ergometer.


The system 1000 can include a velocity determining module 1006 in communication with the signal receiving module 1002. The signal receiving module 1002 can pass the information in the signals from the one or more sensors 1003, 1004 to the velocity determining module 1006. The velocity determining module 1006 can determine based on the information (e.g., force captured by the ergometer) a velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus.


The system 1000 can also include a zoom particle generating module 1008 in communication with the velocity determining module 1006. The zoom particle generating module 1008 can receive the velocity of the user from the velocity determining module 1006. The zoom particle generating module 1008 can determine one or more parameters of the zoom particles based on the velocity. In some embodiments, the zoom particle generating module 1008 can additionally consider other isolated events and factors in the virtual environment in determining the one or more parameters of the zoom particles. Such isolated events and factors can include, for example, virtual inclination and changes in exertion as measured by the ergometer, changes in heart rate of the user, and/or changes in background sounds and music in the virtual experience.


In various embodiments, the parameters can include but are not limited to the number of particles, their spawn rate, the duration that they remain visible, as well as their trajectory, velocity, speed, color, opacity, etc. Together these parameters determine the overall intensity of the zoom particle effect. The zoom particle generating module 1008 can set the parameters to different values based on user input and/or environmental factors. For example, the zoom particle generating module 1008 can set the particle velocity based on the simulated velocity of the user, which, in turn, can be determined based on the velocity provided by the velocity determining module 1006. The zoom particle generating module 1008 can set other parameters such as particle size, color and opacity of the zoom particles based on the current wattage or cadence readout from the ergometer. The zoom particle generating module 1008 can then combine the parameters that respond to velocity with those that respond to wattage, cadence and other factors to create a balanced effect that is optimized to enhance both responsiveness to input and the perception of speed.


Referring again to FIG. 10a, the system 1000 can additionally include a graphic enhancing module 1010 in communication with the zoom particle generating module 1008. The zoom particle generating module 1008 can transmit the parameters of the zoom particle to the graphic enhancing module 1010 to add the zoom particle effect to the graphics to be displayed on the display of the exercise apparatus. In particular, the graphic enhancing module 1010 can animate particles of certain sizes, velocities, colors, trajectories, etc. based on the parameters and incorporate these animated particles into the virtual environment.


The graphic enhancing module 1010 can be in communication with a display 1012 of the exercise apparatus. The display 1012 can display the graphics being provided by the graphic enhancing module 1010 including the particles to create the zoom particle effect to improve the immersive visual experience of the user of the exercise apparatus. The display 1012 can, for example, be the 3D or holographic displays 102, 104 of FIGS. 1 and 2, respectively. The graphic enhancing module 1010 can transmit the enhanced graphics to the display 1012 to be displayed in real time as the user exercises on the exercise apparatus.



FIG. 10b illustrates the exemplary steps in another method of generating enhanced graphics to be displayed in real time as the user exercises on an exercise apparatus, according to an embodiment of the disclosure. An exerted effort of the user of the exercise apparatus is captured by one or more sensors. (Step 1050) A velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus can be determined based on the user's effort on the exercise apparatus. (Step 1051) One or more parameters of the zoom particles can be determined by a zoom particle generating module based on the velocity. (Step 1052) Optionally, the parameters of the zoom particles can be set by the zoom particle generating module to different values based on user input and/or environmental factors. (Step 1053) The parameters of the zoom particle can be transmitted to a graphic enhancing module. (Step 1054) Particles of certain sizes, velocities, colors, trajectories, etc. can be animated by the graphic enhancing module based on the parameters and incorporated into the virtual environment. (Step 1055) Finally, the graphics enhanced with the zoom particle effect can be provided by the graphic enhancing module to a display to improve the immersive visual experience of the user of the exercise apparatus. (Step 1056) It should be understood that some of the steps in the method of FIG. 10b can be performed simultaneously or in a different order.


It should be understood that the system 800 of FIG. 8a that produces the viewport refraction effect and the system 1000 of FIG. 10a that produces the zoom particle effect can both be incorporated into the same exercise apparatus such as the stationary bike 100 of FIG. 1 and the treadmill 200 of FIG. 2. In some embodiments, some of the modules such as the velocity determining module can be shared between the systems.


Dynamic Feedback

The viewport refraction system and viewport particle system are complementary, providing a range of visual cues across the entire depth field of the viewport, from the nearest field close to the user to the distant horizon. When integrated with an ergometer, tracking camera and other sensors, they provide visceral and responsive feedback to users, making exercise more gratifying. In some embodiments, the systems 800, 1000 can be tuned to respond to input so as to guide the user's behavior with visual cues for an optimal exercise session. One method to achieve this involves modulating the intensity of the visual effects (e.g., viewport refraction, zoom particle effects) according to a non-linear correlation to exertion. For example, the graphic enhancing modules 808, 1010 of systems 800, 1000, respectively, can adjust the intensity of the motion perception amplification (e.g., effects of viewport refraction and/or zoom particles) in response to exertion according to a sigmoidal curve, such that the rate of change is more drastic within a target exertion range, making the ramp up to a target exertion threshold more gratifying and any decline below that threshold more pronounced. This allows users to viscerally feel whether they are getting closer to or deviating from the target threshold of exertion. Varying this target exertion threshold can be used to nudge the user to adjust their exercise intensity according to an optimal interval training pattern, such as high, medium, or low intensity interval training, depending on the user's individual needs or preferences.
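The sigmoidal modulation described above can be sketched as follows, with the steepness constant and normalization chosen purely for illustration:

```python
import math

def effect_intensity(exertion, target, steepness=10.0):
    """Modulate visual-effect intensity on a sigmoidal curve centered on a
    target exertion level (both normalized to 0-1), so intensity changes
    most drastically near the target threshold. steepness is illustrative."""
    return 1.0 / (1.0 + math.exp(-steepness * (exertion - target)))
```

Near the target exertion, small changes in effort produce large swings in effect intensity, so the user viscerally feels whether they are approaching or falling away from the threshold; shifting `target` over time implements the interval-training nudging described above.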


Neural Entrainment

The dynamic feedback provided by the viewport refraction and viewport particle systems can provide rhythmic stimuli that combine with other stimuli, such as sound and haptic vibrations, to induce neural entrainment in the user, a phenomenon wherein electrical oscillations in the brain naturally synchronize to external stimuli. Embodiments of the disclosure incorporate this technique to elicit targeted mental states that correspond to specific electrical oscillations in the brain and can be optimal for exercise and wellness.
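A rhythmic stimulus of the kind described above can be sketched as a periodic modulation of the effect intensity at a target frequency. This is an illustrative sketch only; the 10 Hz default (alpha band) and the `depth` parameter are assumptions, not values prescribed by the disclosure.

```python
import math

def entrainment_modulation(t, freq_hz=10.0, depth=0.3, base=1.0):
    """Sinusoidally modulate a visual-effect intensity at a target
    frequency to provide a rhythmic stimulus for entrainment.

    `t` is elapsed time in seconds; the output oscillates around
    `base` with amplitude `depth` at `freq_hz` cycles per second.
    """
    return base + depth * math.sin(2.0 * math.pi * freq_hz * t)
```

The same time base can drive the audio and haptic channels so that all stimuli pulse coherently at the target frequency.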


In one embodiment, targeting desired mental states can be achieved by training an artificial intelligence (AI) or deep learning model with data collected during exercise sessions, wherein one or more cameras and/or sensors observe and record electrical signals from participants' brains, or more indirect biometric signals such as eye movement, heart rate and perspiration that correspond with such electrical brain activity and associated mental states.



FIG. 11a illustrates the exemplary components of a system 1100 for inducing neural entrainment in the user while using an exercise apparatus such as the stationary bike 100 of FIG. 1 or the treadmill 200 of FIG. 2. The system 1100 includes one or more cameras and/or sensors 1102. The sensors/cameras 1102 used for this analysis can include an electroencephalogram (EEG) attached to the participant 1104 being studied, as well as other biometric sensors like pulse-oximeters, spirometers, eye-tracking cameras, etc. The sensors/cameras 1102 can observe and record electrical signals from participants' brains or more indirect biometric signals such as eye movement, heart rate and perspiration that correspond with such electrical brain activity and associated mental states. Data recorded by the sensors/cameras 1102 can be stored in the data recorder 1106. The data from the cameras/sensors 1102 can be synchronized with various participant stimuli being generated by the viewport refraction and/or viewport particles system 1110. The viewport refraction/viewport particles system 1110 can be similar to systems 800, 1000 of FIGS. 8a and 10a, respectively. Additionally or alternatively, other types of stimuli can be generated to stimulate the participant by the system 1110.


By recording such brain activity and biometric signals while generating and testing different variations and combinations of stimuli, including viewport refraction and viewport particles as well as other visuals, sounds and haptics, an AI model 1108 in communication with the data recorder 1106 and the viewport refraction/viewport particles system 1110 (and other systems for generating stimuli) can learn which combinations of stimuli are associated with which electrical brain activities and biometric signals. The AI model 1108 can also receive self-reporting from the user on the user's experience on the exercise apparatus. The AI model 1108 can associate the patterns of brain activity and biometric signals with desired mental states that can be categorized as relaxed, focused, determined, etc., based on analysis of exercise performance, user self-reporting, or comparisons to known brain activity and biometric patterns. The model trained from this analysis is used to generate an exercise experience 1111 that adapts the viewport refraction and/or viewport particle systems 1110, optionally in conjunction with other stimuli generated by other stimulating system(s) 1112, in real-time to elicit desired mental states for an optimal workout.
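The synchronization step that pairs recorded biometric signals with the stimuli that produced them can be sketched as a nearest-timestamp join. The data layout (lists of `(timestamp, payload)` tuples) and the `tolerance` parameter are hypothetical, chosen only to illustrate how training pairs for the AI model might be assembled from the data recorder.

```python
from bisect import bisect_left

def synchronize(stimuli, biometrics, tolerance=0.05):
    """Pair each stimulus event with the nearest biometric sample by
    timestamp, producing (stimulus, biometric) training pairs.

    `stimuli` and `biometrics` are lists of (timestamp, payload)
    tuples sorted by timestamp; `tolerance` is the maximum allowed
    time offset in seconds for a valid pairing.
    """
    times = [t for t, _ in biometrics]
    pairs = []
    for t, stim in stimuli:
        i = bisect_left(times, t)
        # Consider the biometric samples on either side of the
        # insertion point and keep whichever is closer in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times[k] - t))
        if abs(times[j] - t) <= tolerance:
            pairs.append((stim, biometrics[j][1]))
    return pairs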



FIG. 11b illustrates the exemplary steps in a method of inducing neural entrainment in the user of an exercise apparatus, according to an embodiment of the disclosure. First, electrical signals from participants' brains and/or other biometric signals such as eye movement, heart rate and perspiration that correspond with such electrical brain activity and associated mental states are observed and recorded by camera(s) and/or sensor(s). (Step 1150) Data recorded by the sensors/cameras are stored in a data recorder (i.e., data storage). (Step 1151) The data is synchronized with various participant stimuli generated by one or more systems including, but not limited to, the viewport refraction and/or viewport particles systems. (Step 1152) An AI model learns from the synchronized camera/sensor data and stimuli data which combinations of stimuli are associated with which electrical brain activities and biometric signals. (Step 1153) An exercise experience that adapts the viewport refraction and/or viewport particle systems, optionally in conjunction with other stimuli generated by other stimulating system(s), can be generated in real-time using the AI model to elicit desired mental states for an optimal workout from the user of the exercise apparatus. (Step 1154) It should be understood that some of the steps in the method of FIG. 11b can be performed simultaneously or in a different order.
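The real-time adaptation step (Step 1154) can be sketched as selecting, from a set of candidate stimulus combinations, the one the trained model predicts is most likely to elicit the target mental state. The `predict_state` interface on the model is an illustrative assumption standing in for the trained AI model 1108, not a disclosed API.

```python
def adapt_stimuli(model, current_biometrics, target_state, candidates):
    """Pick the stimulus combination the trained model predicts is
    most likely to move the user toward the target mental state.

    `model` is assumed to expose predict_state(biometrics, stimuli)
    returning a dict mapping mental-state labels (e.g., 'relaxed',
    'focused') to probabilities; all names are placeholders.
    """
    best, best_p = None, -1.0
    for stimuli in candidates:
        p = model.predict_state(current_biometrics, stimuli).get(target_state, 0.0)
        if p > best_p:
            best, best_p = stimuli, p
    return best
```

In a closed loop, this selection would run repeatedly during the session, feeding each chosen combination back to the viewport refraction/particle systems and re-evaluating as fresh biometric data arrives.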


The various modules of FIGS. 8, 10, and 11 can be implemented in hardware, software, firmware, or a combination of any of the above. If implemented in software, the modules can be stored in the memory 602 and storage 606 of the exercise apparatus 600.


It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed systems and methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the systems and methods. It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims
  • 1. An exercise apparatus comprising: one or more sensors configured to capture user exercise data; a face-tracking camera configured to track the eye movements of a user of the exercise apparatus; a processor configured to receive the captured user exercise data from the one or more sensors, receive data on the eye movements of the user from the face-tracking camera, and generate a virtual environment based on the user exercise data and the eye movements of the user; and a display configured to display the virtual environment to the user.
  • 2. The exercise apparatus of claim 1, wherein the exercise apparatus is a stationary exercise bike and the virtual environment comprises a virtual race course.
  • 3. The exercise apparatus of claim 1, wherein the one or more sensors comprises one or more of potentiometers, gyroscopes, accelerometers, pressure sensors, rotation sensors, position sensors, cadence sensors, and vibration sensors.
  • 4. The exercise apparatus of claim 3, wherein the processor is further configured to track a movement of the user in response to the user exercise data captured by the one or more sensors.
  • 5. The exercise apparatus of claim 1, wherein the display is a 3-dimensional or holographic display.
  • 6. The exercise apparatus of claim 1, wherein the display comprises a lenticular display configured to be paired with the face-tracking camera to detect the user's eye positions.
  • 7. The exercise apparatus of claim 6, wherein the lenticular display is configured to rapidly output image frames directed alternately at the user's right and left eyes to simulate stereoscopic depth perception.
  • 8. The exercise apparatus of claim 7, wherein the display is further configured to display an imagery that adapts to the position of the user's eyes to simulate visual parallax.
  • 9. The exercise apparatus of claim 1, wherein the display comprises a light-field display configured to direct photons along the proper vector to simulate their trajectory from a virtual scene and create a perception of depth.
  • 10. The exercise apparatus of claim 1, wherein the display comprises holographic projectors configured to simulate depth perception by tracking the user's eye position using the face-tracking camera and project different images into each of the user's eyes to create a stereoscopic view of a virtual scene.
  • 11. The exercise apparatus of claim 1, wherein one of the one or more sensors is configured to detect a position or an orientation of the display; and wherein the processor is further configured to adjust a horizon in the virtual environment in response to the position or orientation of the display.
  • 12. The exercise apparatus of claim 1, wherein the processor is further configured to download information from a remote server, the information comprising software updates or content for use in generating the virtual environment.
  • 13. The exercise apparatus of claim 1, wherein the exercise apparatus is a treadmill and the virtual environment comprises a simulation of a real world location.
  • 14. The exercise apparatus of claim 1, wherein the processor is further configured to adjust a setting of the exercise apparatus in response to the virtual environment being displayed on the display.
  • 15. The exercise apparatus of claim 14, wherein the setting of the exercise apparatus comprises a degree of incline and the degree of incline is adjusted in response to a displayed change of elevation in the virtual environment.
  • 16. A method of providing an immersive exercising experience to a user of an exercising apparatus, the exercising apparatus comprising one or more sensors, a face-tracking camera, a display, and a processor in communication with the one or more sensors, the face-tracking camera, and the display, the method comprising: capturing, by the one or more sensors, user exercise data; tracking, by the face-tracking camera, the eye movements of a user of the exercise apparatus; receiving, by the processor, the captured user exercise data from the one or more sensors; receiving, by the processor, data on the eye movements of the user from the face-tracking camera; generating, by the processor, a virtual environment based on the user exercise data and the eye movements of the user; and displaying, on the display, the virtual environment to the user.
  • 17. The method of claim 16, further comprising tracking a movement of the user in response to the user exercise data captured by the one or more sensors.
  • 18. The method of claim 16, wherein the display comprises holographic projectors, and wherein the method further comprises simulating, by the holographic projectors, depth perception by tracking the user's eye position using the face-tracking camera; and projecting different images into each of the user's eyes to create a stereoscopic view of a virtual scene.
  • 19. The method of claim 16, further comprising adjusting, by the processor, a setting of the exercise apparatus in response to the virtual environment being displayed on the display.
  • 20. The method of claim 16, further comprising downloading, by the processor, information from a remote server, the information comprising software updates or content for use in generating the virtual environment.
CROSS-REFERENCE

This application claims the priority of U.S. patent application Ser. No. 18/587,333, filed on Feb. 26, 2024, which claims the priority of U.S. patent application Ser. No. 17/334,441, filed on May 28, 2021, and issued as U.S. Pat. No. 11,908,476 on Feb. 20, 2024, which claims the priority of U.S. Provisional Application Ser. No. 63/118,149, filed on Nov. 25, 2020. The entirety of all three priority applications is hereby incorporated by reference.

Provisional Applications (1)
Number Date Country
63118149 Nov 2020 US
Continuation in Parts (2)
Number Date Country
Parent 18587333 Feb 2024 US
Child 18656571 US
Parent 17334441 May 2021 US
Child 18587333 US