The present disclosure relates to an exercise apparatus and, more specifically, to an exercise machine with an integrated holographic display.
Current stationary exercise equipment like treadmills, ellipticals, rowing machines, stationary bikes and weight machines can be monotonous to use. While some come with integrated displays for entertainment while exercising, these displays fail to provide rich sensory immersion and the sense of motion that make outdoor exercise like running, rowing or cycling engaging to athletes.
Some existing stationary exercise equipment with 2-dimensional (2D) displays attempts to mitigate the monotony of stationary exercise with on-screen media, but these displays do not provide the stereoscopic visuals and head-motion parallax required to simulate depth perception and convincing motion through 3-dimensional (3D) space, making them less immersive and less engaging than their non-stationary counterparts for outdoor course or indoor track exercise. Stationary exercise is advantageous for several reasons, including that it can be done from the comfort of one's home or gym, regardless of weather conditions outside. However, current stationary exercise equipment does not offer the immersive visuals and perception of motion that can increase enjoyment and engagement and enhance therapeutic mental health benefits during exercise. While other systems connect virtual reality and augmented reality headsets or eyewear to exercise equipment to simulate motion, such headsets are cumbersome to wear and prone to causing nausea or motion sickness.
There is no existing exercise equipment that can provide an immersive experience without requiring the user to wear a headset.
Embodiments of the disclosure use an integrated and responsive 3D or holographic display attached to and/or integrated with stationary exercise equipment to create a more immersive, engaging and enjoyable stationary exercise experience. The 3D or holographic display provides a more stimulating sensory experience and can better simulate the perception of motion through a 3D virtual environment.
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. These drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure.
The terms “exercise apparatus,” “exercise equipment,” and “exercise machine” are used interchangeably in this document.
The terms “holographic,” “3D,” “spatial,” and “volumetric” are used interchangeably in reference to display technology. They describe various systems that enable on-screen visuals to appear different to each of the user's eyes (i.e., stereoscopic visuals) and to change based on the position of the user's head (i.e., head-motion parallax), which together serve to accurately simulate depth and motion through 3D space. These systems can include lenticular displays, retinal projectors, and/or light-field displays, as well as face-tracking cameras to optimize imagery based on the position of the user's eyes and head. These methods of display do not require headwear or eyewear, as in other virtual or augmented reality systems.
As stated above, current stationary exercise equipment like treadmills, ellipticals and stationary bikes can be monotonous to use. While some come with integrated displays for entertainment while exercising, these displays fail to provide rich sensory immersion and the sense of motion that make outdoor exercise like running or cycling engaging to athletes. Embodiments of the disclosure solve this problem.
To make stationary exercise machines more immersive and engaging, as well as to enhance their physical and mental health benefits, embodiments of the disclosure integrate a holographic display that responds when, for example, force is exerted on the stationary exercise machine (e.g. when the user pedals a stationary bicycle), changing the imagery on screen to simulate motion through a virtual environment.
In some embodiments, the 3D display can also be utilized more generally to make on-screen content more engaging and immersive (e.g. displaying 3D movies as the user exercises). To further increase the immersiveness of the exercise experience, the spatial display on the exercise equipment can integrate with spatial speakers to simulate sound in virtual space.
Embodiments of the disclosure differ from what currently exists. The embodiments use a holographic display attached to and/or integrated with stationary exercise equipment to provide stereoscopic visuals and head-motion parallax that enhance the exercise experience. Additionally, a holographic display that is integrated with the exercise equipment is less nausea-inducing and less cumbersome than a virtual or augmented reality head-mounted display.
Stationary exercise equipment, even equipment with displays, provides a less immersive and engaging experience than outdoor or track exercise. Because of their 2D displays, these machines can only convey depth and motion in a limited capacity through monoscopic perspective (e.g. using monoscopic video that displays distant virtual objects smaller than nearby virtual objects). These systems are unable to provide the stereoscopic visuals or head-motion parallax required for rich sensory stimulation and a convincing perception of motion.
Embodiments of the disclosure use an integrated and responsive 3D or holographic display attached to and/or integrated with stationary exercise equipment to create a more immersive, engaging and healthy stationary exercise experience. The holographic display provides a more stimulating sensory experience and can better simulate the perception of motion through a 3D virtual environment.
Also, embodiments of the disclosure function as a software content delivery system. The software content (e.g. software applications) delivered through exercise equipment can utilize the sensors, displays, cameras and exercise equipment in different ways to create different exercise experiences. For example, one software application might virtually simulate cycling the Tour de France with other live participants connected via the internet in an immersive 3D scene generated by photogrammetry depth capture. Another application might simulate running through a virtual forest realistically rendered from a game engine. Yet another might simulate a fitness class with a personal trainer appearing as an animated avatar in a virtual gym. Other content might include holographic music visualization or computational art. Any application can leverage the array of display(s), speaker(s), sensor(s), and exercise equipment in different ways to create unique 3D and holographic experiences.
The exercise equipment according to the disclosed embodiments can be stationary exercise equipment (e.g. stationary bike, treadmill, elliptical, rowing machine, etc.) with electronic sensors to detect force exerted by the user and/or capture other information relating to the user's movement when using the exercise equipment.
The exercise equipment disclosed herein can include a 3D or holographic display (e.g., one utilizing lenticular display technology, light-field display technology, or projector-based display technology) and a positional-tracking camera system (for face tracking) to record the position of the user's eyes, allowing the imagery from the 3D or holographic display to adapt and better simulate depth perception through a 3D virtual environment.
The exercise equipment disclosed herein can additionally include computer hardware/software such as one or more processors to interpret sensor and camera data and output graphical content through the display.
Embodiments of the exercise equipment can display virtual content through the 3D or holographic display.
In another aspect of the disclosure, a method of connecting an exercise apparatus to the internet to stream updates or access content is disclosed. The connection can be mediated through a mobile phone or made directly over WiFi or cellular networks.
The stationary exercise bike 100 can incorporate one or more sensors 101 to detect force exerted by the user and capture other data on how the user is using the stationary exercise bike 100. The sensors for detecting force and other data vary depending on the type of exercise equipment and may include potentiometers, gyroscopes and accelerometers as well as optical sensors like cameras. In some embodiments, the one or more sensors 101 can also include pressure sensors, rotation sensors, position sensors, cadence sensors, vibration sensors, etc.
In the embodiment illustrated in
For example, force sensors 101 in the pedals can detect the amount of force exerted by the user when pedaling. Alternatively or additionally, a cadence sensor 107 can be attached to the stationary bike's crank arm to measure the real time cadence when the stationary bike 100 is in use. Alternatively or additionally, one or more vibration sensors and/or accelerometers 109 can be attached or embedded in the frames of the stationary bike 100 to detect the vibration and/or tilting of the stationary bike 100 when in use. It should be understood that the exemplary sensors 101, 107, 109 shown in
Additionally, one or more sensors 110 may be integrated with the display 102 to detect the position and orientation of the display 102. These may be used to adapt on-screen content, for example, so that the horizon line displayed on-screen matches the real-world horizon line regardless of the angle and orientation of the display. Sensor 110 can be an accelerometer or a position sensor.
The processor 104 can receive signal(s) from the one or more sensors 101 and determine, based on the signal(s), for example, the real-time speed at which the user is pedaling the bike, and adjust the 3D or holographic environment being displayed on the display 102 accordingly to properly simulate the user biking through the simulated environment.
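By way of a non-limiting sketch, this sensor-to-display loop might be implemented as follows; the gear ratio, wheel circumference, and scene object names are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch only: constants and the scene API are assumptions.
WHEEL_CIRCUMFERENCE_M = 2.1   # assumed virtual wheel circumference (meters)
GEAR_RATIO = 2.8              # assumed fixed virtual gear ratio

def simulated_speed_mps(cadence_rpm: float) -> float:
    """Convert crank cadence (e.g., from sensor 107) into a simulated ground speed."""
    wheel_rps = (cadence_rpm / 60.0) * GEAR_RATIO
    return wheel_rps * WHEEL_CIRCUMFERENCE_M

def update_virtual_scene(scene, cadence_rpm: float, dt: float) -> None:
    """Advance the rider through the virtual environment by one frame."""
    speed = simulated_speed_mps(cadence_rpm)
    scene.rider_position += scene.heading * speed * dt  # heading: unit vector
```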
The stationary bike of
The camera 103 can send in real time a signal embedding the information it captured to the processor 104. The processor can then process the signal from the camera to determine imagery output through the 3D or holographic display 102 that can adapt and accurately simulate depth-perception and motion through virtual space. As a result, while the user exerts force on the stationary exercise bike, the imagery displayed through the 3-dimensional or holographic display can respond to simulate motion through virtual space.
The 3D or holographic display 102 can be any existing display capable of providing content in a way that provides the user an immersive experience while using the stationary bike 100 without requiring the user to wear any virtual reality (VR) headset. For example, the 3-dimensional or holographic display 102 may utilize a lenticular display paired with the face-tracking camera 103 to detect the user's eye position. The lenticular display can rapidly output image frames directed alternately at the user's right and left eyes to simulate stereoscopic depth perception. The imagery adapts to the position of the user's eyes to simulate visual parallax. Alternatively, the 3-dimensional or holographic display 102 can be a light-field display, which directs photons along the proper vectors to simulate their trajectories from a virtual scene and create the perception of depth. Lastly, holographic projectors provide another alternative method of simulating depth perception by tracking the user's eye position with the camera 103 and then projecting different images into each of the user's eyes to create a stereoscopic view of a virtual scene. The embodiments of the disclosure can use any of these methods or a combination of them.
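As a rough illustration of how tracked eye positions could drive per-eye imagery on such a display, assuming the face-tracking camera 103 yields a 3D position for each eye every frame (the renderer object and its API below are hypothetical), one view can be rendered per eye so that both stereo disparity and head-motion parallax follow directly from the per-eye view matrices:

```python
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Build a right-handed view matrix for one tracked eye position."""
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def render_stereo_frame(renderer, scene, left_eye, right_eye, focus, up):
    """Render one image per eye; the lenticular optics (or projectors)
    then steer each image to the corresponding eye."""
    left = renderer.render(scene, look_at(left_eye, focus, up))    # hypothetical API
    right = renderer.render(scene, look_at(right_eye, focus, up))
    return left, right
```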
The virtual content output through the 3-dimensional or holographic display 102 can respond to the sensor data input from the one or more sensors 101 of the stationary exercise bike 100. Conversely, the stationary exercise bike can adapt to the virtual content. For example, to simulate a virtual hill, the resistance of the pedals of a stationary bicycle might increase to give the user a sense of pedaling up an actual hill. Additionally, the virtual content can adapt and respond to the positional-tracking cameras 103 to better simulate depth-perception. The face-tracking cameras 103 might also be used to better understand how the user is perceptually, emotionally, physiologically and psychologically experiencing their exercise. The virtual content can adapt to this information to optimize and customize the exercise experience for each individual user. For example, the camera(s) 103 might detect infrared light output from the user's body to infer heart-rate and blood-flow and adjust the exercise intensity to maintain a constant, optimum heart-rate. Additionally or alternatively, the exercise equipment might use a combination of cameras 103 with one or more electrodermal sensors 112 (positioned in the handlebars, for example) to detect perspiration and infer hydration levels and then prompt the user to drink liquid when needed. An array of sensors including cameras (e.g., camera 103), cadence and resistance sensors and electrodermal sensors (e.g., sensor 112) might track and interpret subtle variations in perspiration, heart-rate, exertion, eye-movement, facial expression, exercise technique, etc. to infer when the user is experiencing a peak rush of euphoria (known as “runner's high”) while exercising and synchronize visual and audio content to enhance that euphoria, for example, by displaying more exciting imagery and louder music that match the rhythm of the user's heart-rate or exercise cadence. The audio content can be stored locally in a storage of the stationary exercise bike 100 or streamed from a remote source (e.g., a cloud server). The audio content can be synchronized with the visual content by the processor 104 in response to data captured by the one or more sensors 101, 107, 109, 110, 112, and camera(s) 103. One or more speakers 120 located at different locations on the stationary exercise bike 100 can output the audio content with the intended effects and/or volume.
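A minimal sketch of the constant-heart-rate example above, assuming heart-rate has already been inferred from the camera 103 (the function name, step size, and tolerance band are illustrative choices, not taken from the disclosure):

```python
def adjust_resistance(current_hr: float, target_hr: float,
                      resistance: int, step: int = 1, band: float = 3.0) -> int:
    """Nudge pedal resistance so the inferred heart-rate settles near a target."""
    if current_hr < target_hr - band:
        return resistance + step          # user below target: work harder
    if current_hr > target_hr + band:
        return max(0, resistance - step)  # user above target: ease off
    return resistance                     # within band: hold steady
```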
Although the 3-dimensional or holographic display 102 is shown to have a flat display surface, it should be understood that the display 102 can have a surface of any type and any curvature. It should also be understood that the 3-dimensional or holographic display 102 can include multiple screens that combine to create the immersive visual experience for the user. It should also be understood that the display 102 can be of any size and shape.
As illustrated in
Referring back to
Content to be shown on the 3D or holographic display 102 and software or firmware updates can be streamed or downloaded over the internet. For example, the user can select a stage of the Tour de France to be rendered by the 3D or holographic display 102 while using the bike 100 to simulate competing in the race. Specifically, the 3D or holographic display 102 can display pre-existing (e.g., downloaded) continuous footage of a Tour de France race captured using an omnidirectional, stereoscopic camera on a vehicle (e.g., a bicycle or a car). The footage can be shown at a pace that corresponds to the user's pace on the stationary bike 100 and from different angles that change in response to the user's eye movements captured by the face tracking camera 103.
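One way to pace such footage, assuming the capture vehicle's speed is stored as metadata alongside the video (a hypothetical detail), is to scale the playback rate by the ratio of the user's simulated speed to the capture speed:

```python
def playback_rate(user_speed_mps: float, capture_speed_mps: float) -> float:
    """Scale footage playback so on-screen motion matches the user's pace."""
    if capture_speed_mps <= 0:
        return 0.0
    return user_speed_mps / capture_speed_mps

# e.g. footage captured at 10 m/s and a user riding at 7.5 m/s -> play at 0.75x
```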
Sensor 202 can be a sensor placed under the belt of the treadmill to detect the force, timing, and/or location of the contact made by the user's feet. Sensor 202 can be multiple sensors placed at different locations under the belt. Additionally or alternatively, sensors 204 can be placed on areas of the top handlebars of the treadmill to detect any force from the user gripping the handlebars. Data detected by the sensors 202, 204 can be transmitted to the computer processor 210 of the treadmill 200.
The 3-dimensional or holographic display 206 and the face-tracking camera 208 of the treadmill 200 can be similar to the 3-dimensional or holographic display 102 and face-tracking camera 103 of the stationary bike 100 of
The processor 210 can process data received from the sensors 202, 204 and the camera 208, together with the settings (e.g., degree of incline, speed setting) of the treadmill, to determine the user's pace, lateral movement, head/eye movement, etc. when the user is using the treadmill 200. The processor 210 can then display on the display 206 3-dimensional or holographic imagery that simulates an immersive visual experience (e.g., running through a forest or on a race track against other runners) for the user. The imagery can be video streamed in real time or content pre-downloaded from a remote server such as a cloud server.
The treadmill 200 can optionally include additional sensors not shown in
It should be understood that both the stationary exercise bike 100 of
Processor 604 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 604 may be configured to receive data and/or signals from sensors, cameras, other types of user interfaces such as a keypad or a touch screen, and/or other devices on the network, and process the user input and received data and/or signals to determine the settings of the exercise apparatus, including what content is to be provided via the 3-dimensional or holographic display and how the content is provided.
Processor 604 may execute computer instructions (program codes) stored in memory 602 and/or storage 606, and may perform functions in accordance with exemplary techniques described in this disclosure. Memory 602 and storage 606 may include any appropriate type of mass storage provided to store any type of information that processor 604 may need to operate. Memory 602 and storage 606 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 602 and/or storage 606 may be configured to store one or more computer programs that may be executed by processor 604 to perform exemplary functions disclosed in this disclosure including generating content for displaying on the 3-dimensional or holographic display of the exercise apparatus. For example, memory 602 and/or storage 606 may be configured to store program(s) that may be executed by processor 604 to determine the speed at which the content is being played on the display based on the pace of the user running on the treadmill. The program(s) may also be executed by processor 604 to provide an interface for interacting with a user.
Memory 602 and/or storage 606 may be further configured to store information and data used by processor 604. Memory 602 and/or storage 606 may be configured to store real-time streaming or pre-downloaded video content and/or software updates to the exercise machine.
Referring again to
I/O interface 608 can allow the exercise apparatus 600 to interact with a user. For example, the I/O interface 608 can be a touch screen that displays an interactive screen for the programs (or apps) running on the exercise apparatus 600. The touch screen can also receive touch or gesture input from a user. Any other conventional I/O interface can also be incorporated into the apparatus.
In another aspect of the disclosure, a method of providing an immersive experience to a user of an exercise machine is provided. The method can be performed by software, firmware, hardware, and/or a combination thereof. The software and firmware can be stored in a local storage or hosted on a remote server connected to the exercise equipment.
Next, the processor initiates the exercise machine based on the user inputs (step 402). As the user starts exercising on the exercise machine, the processor can receive real-time data from the one or more sensors capturing the movement, force, and other data associated with the user's action (step 403). Similarly, the processor can also receive information from the face-tracking camera that tracks, for example, the user's eye movement (step 404). The processor can analyze the sensor data in combination with the camera data to display or simulate a virtual environment via the 3D or holographic display of the exercise machine for the user (step 405). The processor can optionally continue to make real-time adjustments to the virtual environment based on the data received from the sensors and the camera (step 406). Additionally or alternatively, the processor can generate feedback (e.g., changing the incline on the treadmill) by adjusting the settings of the exercise equipment based on the virtual environment being shown to the user (step 407). When the user finishes his/her exercise, the processor can turn off the display (step 408).
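The steps above can be summarized as a single control loop. The following sketch is illustrative only; the machine, sensor, camera, and display objects and the simulate_environment helper are hypothetical names, not defined by the disclosure:

```python
def run_session(machine, sensors, camera, display):
    """Sketch of steps 402-408 as one frame loop."""
    machine.initialize()                             # step 402
    while machine.in_use():
        motion = sensors.read()                      # step 403: force/movement data
        gaze = camera.track_eyes()                   # step 404: eye positions
        frame = simulate_environment(motion, gaze)   # step 405: hypothetical helper
        display.show(frame)                          # step 406: real-time adjustment
        machine.apply_feedback(frame.terrain)        # step 407: e.g., treadmill incline
    display.off()                                    # step 408
```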
In yet another aspect of the disclosure, embodiments of exercise equipment can also function as a software content delivery system. The software content (i.e. software applications) delivered through the exercise equipment can utilize the sensors, displays, cameras and exercise equipment in different ways to create different exercise experiences. For example, one software application might virtually simulate cycling the Tour de France with other live participants connected via the internet. Another application might simulate running through the woods. Any application can leverage the array of display(s), sensors, and exercise equipment in different ways to create unique experiences for the user.
In another aspect of the disclosure, an exercise apparatus with an integrated performance verification system is disclosed.
Current athletic esports platforms such as virtual cycling and rowing races are vulnerable to cheating, especially when participants connect remotely and it is not possible to verify their physical performance through direct human observation. Dishonest participants can cheat using methods such as interfering with exertion sensors in exercise equipment, attaching external motors that turn gears of exercise equipment to simulate athletic exertion, and hacking various layers of the software systems that esports platforms rely on. High numbers of remote participants in massively multiplayer environments can further exacerbate the challenge of verifying individual performance. In addition, some categories of exercise equipment, like treadmills, do not have integrated mechanisms that respond to physical exertion, such as the cadence and wattage sensors common in cycling and rowing, and instead estimate athletic output from the speed of the motorized tread, presenting additional challenges for performance verification. As a result, much performance data goes unverified or is not thoroughly verified, particularly in casual multiplayer settings, allowing dishonest participants to outpace other players, score points, earn rewards and even qualify for competitive events.
Embodiments of the disclosure utilize computer vision systems integrated with exercise equipment alongside common exercise exertion sensors like cadence and wattage gauges to verify that the recorded physical exertion output aligns with biometric signals simultaneously registered by the computer vision system. The camera sensors and computer vision systems used to measure biometric signals for the purposes of performance verification and cheating prevention can be the same as or used in concert with those used in a stereoscopic display attached to the exercise equipment such as the stationary bike 100 of
For example, one approach to verify whether a person is actually exerting the effort that the ergometer is reading would be to analyze camera footage of a person while they use exercise equipment like a rowing machine and determine whether their body motion during each rowing stroke aligns with the expected body motion needed to exert the force reported by the ergometer during the rowing stroke. The expected body motion in this case would be predetermined by analyzing a large sampling of camera footage of people performing similar exercise. Another approach to measure other biometric signals such as heart-rate, circulation and blood-oxygen saturation involves optical sensors (i.e. cameras) and illuminators using photoplethysmography. These optical sensors can be placed on the exercise apparatus so as to allow for close skin contact, such as on the handlebars. Alternatively, the camera sensor used can be the same as the one used to detect body motion and would be placed further from the user for non-contact photoplethysmography.
The exercise apparatus 500 can optionally include additional sensors such as exercise exertion sensors like cadence sensor 506 and wattage gauge 508. Other types of sensors such as pressure sensors, rotation sensors, position sensors, vibration sensors (not shown in
The exercise apparatus can also include a processor 510 in communication with the computer vision system 504 and the optional additional sensors 506, 508. The processor 510 can receive biometric signals and other data from the computer vision system 504. The processor 510 can also receive signals from the additional sensors 506, 508. The processor 510 can verify whether the physical performance level indicated by one or more of the biometric signals aligns with the physical performance level simultaneously recorded by the other sensors (e.g., exertion sensors 506, 508) of the exercise apparatus 500.
For example, the processor can determine whether the body motion patterns and the amount of perspiration detected are consistent with the same performance level (e.g., pedaling/running speed). If both the body motion patterns captured by the computer vision system 504 and the signals received from the exercise exertion sensors 506, 508 indicate that the user is pedaling or running at about the same speed, the user's performance is verified. In contrast, if the camera data (e.g., body motion patterns) shows that the user is running at a slower pace than the data from the exercise exertion sensors 506, 508 indicates, the processor 510 can determine that the data from at least some of the exercise exertion sensors 506, 508 may not be accurate in representing the user's actual performance on the exercise apparatus. This can provide a mechanism to verify user performance in an esport competition taking place remotely on participants' own exercise apparatus.
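A minimal sketch of this cross-check, assuming both data paths have been reduced to comparable pace estimates (the tolerance value is an illustrative tuning choice):

```python
def performance_verified(vision_speed_mps: float, sensor_speed_mps: float,
                         tolerance: float = 0.15) -> bool:
    """Compare the camera-estimated pace with the pace implied by the
    exertion sensors; a large relative gap suggests possible tampering."""
    if sensor_speed_mps == 0:
        return vision_speed_mps == 0
    deviation = abs(vision_speed_mps - sensor_speed_mps) / sensor_speed_mps
    return deviation <= tolerance  # False -> raise an alert for review
```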
In some embodiments, the processor 510 may not be on the exercise apparatus 500 but instead be on a remote computer (e.g., a cloud processor) that is connected to the exercise apparatus via a network such as the Internet. Sensor data from the computer vision system 504 and the other sensors 506, 508 can be transmitted via the network to the remote computer for processing.
In some embodiments, when multiple biometric signals are measured together by the computer vision system 504, the processor can cross-check them against each other as well as against the output of the exertion sensor(s), creating a stronger verification system that is more difficult to dupe. For example, in the case of an activity like rowing where stroke form contributes to overall output, analyzing body motion can help verify that the user's stroke rate and technique match the exertion readout from the ergometer. However, body motion analysis alone may not be sufficient to prevent cheating. Analyzing heart-rate alongside body motion can further verify that the user's cadence output aligns with the ergometer readout, helping to ensure they are not artificially amplifying their stroke force. These separate methods together can magnify the cumulative effectiveness more than the sum of their parts as it becomes much more difficult to dupe multiple systems simultaneously.
In some embodiments, it is also possible for the processor 510 to bypass the process of inferring biometric measures like heart-rate altogether and instead only analyze variations in the raw biometric signals emitted by the user as detected by the camera sensors. For example, using this method to correlate exertion with skin light absorption is more direct than correlating exertion with the inferred heart-rate, which is an extrapolation and abstraction of the observed biometric signal of skin light absorption. Such systems can be facilitated by neural network models trained on data sets collected from numerous users in a variety of settings and conditions.
In some contexts, such as casual non-competitive exercise, simpler computer vision systems that track fewer biometric signals might be sufficient to verify performance whereas other more stringent use-cases such as esports competitions might require more robust verification that relies on a variety of biometric signals and computer vision systems. In some embodiments, the exercise apparatus can have different performance verification settings that can be set either by the user or by a remote computer based on the context of an exercise session. For example, for a formal esports competition, each of the participating exercise apparatus can be connected to a central computer over a network. The central computer can set the performance verification setting on each exercise apparatus to the highest level, enabling all verification mechanisms available on the apparatus. The central computer can also lock the setting so the user cannot override it when in competition. In contrast, for a casual training session, the performance verification setting can be turned off.
In some embodiments, analyzing data of simultaneous output from the exertion sensor and optical sensors (e.g., cameras) from numerous different users and their exercise sessions can establish a baseline correlation that can then be used to verify an individual user's physical performance. The aggregate session data can also be compared to details about a particular user such as height, weight, and age, as well as data collected from previous exercise sessions, to establish an individual baseline specific to that user.
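For instance, the baseline could be expressed as the typical ratio of optically estimated output to ergometer output, with a session flagged when its ratio deviates sharply from that baseline. The sketch below assumes wattage estimates on both sides and uses an illustrative z-score test:

```python
import statistics

def anomaly_score(session_pairs, baseline_mean: float, baseline_sd: float) -> float:
    """session_pairs: [(optical_estimate_watts, ergometer_watts), ...].
    Returns how many standard deviations this session's optical/ergometer
    ratio sits from the baseline; large magnitudes suggest inflated readouts."""
    ratios = [optical / ergo for optical, ergo in session_pairs if ergo > 0]
    return (statistics.fmean(ratios) - baseline_mean) / baseline_sd
```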
Referring again to
Data from one or more other sensors (e.g., exercise exertion sensors) on the exercise apparatus can be processed by a processor to determine the veracity of a second set of user performance data. (Step 705) The first and second sets of performance data are synchronized and compared to determine if there are significant inconsistencies between the data sets. (Step 706) Any significant inconsistencies can trigger an alert about that user's performance (Step 707) that can be handled by the system differently depending on the context and programming. For example, in the context of a competitive race, the alert can be redirected to a supervising authority such as a human referee that can review and determine if there has been a rule violation based on the evidence provided by the system. It should be understood that some of the steps illustrated in
In yet another aspect of the disclosure, systems and methods of enhancing the perception of motion in athletic simulations are disclosed. As discussed in the embodiments above, digital simulations of cardiovascular sport activities such as cycling, running, and rowing can help alleviate the monotony of indoor stationary exercise by visualizing the user moving through virtual environments on screens attached to exercise equipment such as stationary bikes, treadmills, and rowing machines. Conveying a convincing sense of speed and motion on-screen is critical to simulating the enjoyable experience of training in the real-world.
However, movement through virtual worlds displayed on a conventional screen can feel slower than it would in the real-world even when the simulated pace in the virtual scene accurately matches what would be experienced in the real-world based on the output exertion of the user. For example, cycling at a rate of 20 miles per hour is a rigorous pace and feels very fast in the real-world but simulating that pace of movement through virtual space on a conventional screen can feel much slower. This can make exercising with these simulators less engaging. Several factors contribute to this difference in the perception of speed. The real-world visual sense of speed is influenced by the wider field-of-view, peripheral vision, and stereoscopic depth perception of the person in motion relative to their environment.
While head-mounted displays or immersive dome displays can create an accurate visual sense of speed by providing a wider field-of-view and peripheral vision, these methods carry an increased risk of inducing nausea and motion sickness. As will be discussed in detail below, one method to alleviate this is to artificially restrict the field of view whenever the user is moving, even slowly, creating a tunnel-vision effect, and to simulate an amplified sense of motion within the more limited field of view.
In some embodiments of the disclosure, systems of enhancing the perception of motion in athletic simulations (e.g., using stationary exercise apparatuses) are disclosed. To enhance the perception of motion, the systems can produce responsive visual effects applied to the viewport of a stereoscopic display attached to exercise equipment such as stationary bikes, treadmills, and rowing machines. The motion perception amplification results from two visual effect techniques, the viewport refraction system and the viewport particle system, each of which can dynamically respond to the user's physical exertion and body movement measured by sensors on the exercise equipment.
The viewport refraction system enhances the perception of motion by exaggerating the field of view rendered by the stereoscopic display. This expanded view allows users to see more of the objects and terrain close around them, such as the ground just in front of them or the trees along their periphery. While the user moves through the virtual environment, these nearfield visual markers appear to pass by faster than objects far away. By making nearby objects and terrain more visible to the user, the viewport refraction effect can accentuate the user's sense that they are moving fast through virtual space.
In one variation, the intensity of the refraction effect responds to the velocity of the user in virtual space and their exerted effort as recorded by the ergometer on the exercise equipment so that when they exert more effort and move faster through virtual space, they see a wider field of view. This makes changes in virtual velocity even more pronounced and makes exerted effort feel more gratifying.
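As a simple sketch of this variation, the rendered field of view might widen with virtual speed; the base, maximum, and saturation values below are illustrative tuning constants:

```python
def refracted_fov_deg(speed_mps: float, base_fov: float = 60.0,
                      max_fov: float = 110.0, saturation_speed: float = 12.0) -> float:
    """Interpolate the viewport field of view from base_fov at rest up to
    max_fov once the virtual speed reaches saturation_speed."""
    t = min(max(speed_mps / saturation_speed, 0.0), 1.0)
    return base_fov + (max_fov - base_fov) * t
```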
The system can include a velocity determining module 806 in communication with the signal receiving module 802. The signal receiving module 802 can pass the information in the signals from the one or more sensors 803, 804 to the velocity determining module 806. The velocity determining module 806 can determine based on the information (e.g., force captured by the ergometer) a velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus.
The system can further include a graphic enhancing module 808 in communication with the velocity determining module 806. The graphic enhancing module 808 can receive the velocity from the velocity determining module 806 and modify the graphics based on the velocity. In one embodiment, the graphic enhancing module 808 can add the refraction effect discussed above. That is, while the user moves through the virtual environment as he or she engages in an exercise on the exercise apparatus (e.g., the stationary bike 100 of
One of the increased effects can be a wider field of view being provided on the display. To achieve this expanded field of view in the virtual viewport of the stereoscopic display, the graphic enhancing module 808 can have the left and right eye image warped separately according to the same curvature resulting in a visual effect similar to the physical phenomenon of a convex lens refracting light so that the resulting image seen through the lens has a wider field of view. Additionally, the graphic enhancing module 808 can be programmed with the capability to differentiate between components of the virtual scene that should be warped or unwarped. For example, graphic interface elements, text, particles and certain objects might not be intended to be warped whereas the landscape and terrain must be warped to achieve the expanded field of view. One method to achieve this uses multi-pass rendering to differentiate between components of the virtual scene that should appear warped or unwarped. The warped and unwarped passes are rendered separately and then composited together by the graphic enhancing module 808. Another method might use a single render pass for all components in a scene that could differentiate between warped and unwarped objects depending on their placement in the virtual scene. For example, objects that appear in front of the stereoscopic display panel might be unwarped, whereas those that appear behind the display panel are warped.
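A minimal sketch of the per-eye warp and the two-pass compositing described above, assuming images are NumPy arrays; the radial-warp formula is one illustrative way to approximate a convex-lens refraction, not the specific warp of the disclosure:

```python
import numpy as np

def warp_eye_image(img: np.ndarray, strength: float) -> np.ndarray:
    """Apply the same barrel-like radial warp to one eye's image so that a
    wider field of view is squeezed into the same viewport."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    nx, ny = (xs - w / 2) / (w / 2), (ys - h / 2) / (h / 2)  # normalized [-1, 1]
    scale = 1.0 + strength * (nx ** 2 + ny ** 2)             # grows with radius
    sx = np.clip((nx * scale + 1) * w / 2, 0, w - 1).astype(int)
    sy = np.clip((ny * scale + 1) * h / 2, 0, h - 1).astype(int)
    return img[sy, sx]  # sample from progressively wider positions

def composite(warped_scene: np.ndarray, unwarped_ui: np.ndarray,
              ui_alpha: np.ndarray) -> np.ndarray:
    """Second pass: overlay interface elements rendered without the warp."""
    return warped_scene * (1 - ui_alpha) + unwarped_ui * ui_alpha
```

The left and right eye images would each pass through warp_eye_image with the same curvature, consistent with the separately-warped-but-identical-curvature approach described above.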
In addition to expanding the field of view, the graphic enhancing module 808 can also amplify the perception of depth to enhance the sense of speed depending on how the refraction distortion is applied across the rendered image.
Referring back to
In some embodiments, zoom particles are another effect that can enhance the perception of motion by populating the user's viewport with small moving points, streaks and artifacts that respond to the user's physical exertion and virtual velocity. The particle trajectory flows nearby and through the viewport in virtual space so that they appear to move by the user faster than objects and terrain that are farther away, creating an even more prominent motion parallax effect. These particles originate in front of the viewport and move toward the viewport along the user's motion trajectory in virtual space. The particle trajectory can respond to information from the eye, head and body tracking camera sensors attached to the exercise equipment, such that they move closely by the user's eyes, head and body without colliding. When used with a stereoscopic display, this is critical to ensure the particles avoid intersecting with the user's eyes and head which can induce an uncomfortable sensation that causes the user to flinch. This enables the particles to appear as close as possible to the user, maximizing the perception of speed without causing discomfort.
According to the embodiments, the intensity of the zoom particles is determined by several parameters that can respond dynamically to the readings from the ergometer and other sensors on the exercise equipment as well as other isolated events and factors in the virtual environment to create engaging visual feedback that further accentuates the user's perception of motion. These parameters can include but are not limited to the number of particles, their spawn rate, the duration that they remain visible, as well as their trajectory, velocity, speed, color, opacity, etc. Together these parameters determine the overall intensity of the zoom particle effect. Each can respond differently to user input and environmental factors. For example, particle velocity might correspond to the simulated velocity of the user while other parameters like particle size, color and opacity might be determined more by the current wattage or cadence readout from the ergometer. Because virtual velocity is partly determined by environmental factors such as virtual inclination, wattage and cadence are more direct measures of exerted effort, resulting in visual feedback from particles that feels more responsive to user input. Combining parameters that respond to velocity with those that respond to wattage, cadence and other factors can create a balanced effect that is optimized to enhance both responsiveness to input and the perception of speed.
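The parameter mapping might look like the following sketch, where the specific constants and the choice of which readout drives which parameter are illustrative tuning decisions (the 3x velocity exaggeration echoes the 15-to-45 miles-per-hour example given below):

```python
def zoom_particle_params(virtual_speed: float, watts: float, cadence: float) -> dict:
    """Map ergometer readouts and simulated velocity to particle parameters."""
    return {
        "velocity": virtual_speed * 3.0,             # deliberately exaggerated
        "spawn_rate": 20.0 + watts * 0.5,            # particles per second
        "size": 0.5 + min(watts / 400.0, 1.0),       # grows with effort
        "opacity": min(0.2 + cadence / 120.0, 1.0),  # brightens with cadence
        "lifetime_s": 1.5,                           # time each particle persists
    }
```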
These zoom particles are especially effective at enhancing the perception of motion because their velocity and intensity do not have to be physically accurate and can be independent of the movement of the rest of the virtual environment relative to the user. While it is beneficial for the purposes of accurate simulation that the user moves through virtual space at a pace that aligns with real-world physics based on their exertion, even though this can feel slow, particles can be optimized to maximize the enjoyment and gratification of exercise regardless of virtual physics. For example, if the user is moving through the virtual world at 15 miles per hour based on their exertion, the particles can move by the user at a rate of 45 miles per hour to artificially amplify the user's perception of speed.
The system 1000 can include a velocity determining module 1006 in communication with the signal receiving module 1002. The signal receiving module 1002 can pass the information in the signals from the one or more sensors 1003, 1004 to the velocity determining module 1006. The velocity determining module 1006 can determine based on the information (e.g., force captured by the ergometer) a velocity (or pace) at which the user is pedaling, running, or rowing on the exercise apparatus.
The system 1000 can also include a zoom particle generating module 1008 in communication with the velocity determining module 1006. The zoom particle generating module 1008 can receive the velocity of the user from the velocity determining module 1006. The zoom particle generating module 1008 can determine one or more parameters of the zoom particles based on the velocity. In some embodiments, the zoom particle generating module 1008 can additionally consider other isolated events and factors in the virtual environment in determining the one or more parameters of the zoom particles. Such isolated events and factors can include, for example, virtual inclination and changes in exertion as measured by the ergometer, changes in heart rate of the user, and/or changes in background sounds and music in the virtual experience.
In various embodiments, the parameters can include but are not limited to the number of particles, their spawn rate, the duration that they remain visible, as well as their trajectory, velocity, speed, color, opacity, etc. Together these parameters determine the overall intensity of the zoom particle effect. The zoom particle generating module 1008 can set the parameters to different values based on user input and/or environmental factors. For example, the zoom particle generating module 1008 can set the particle velocity based on the simulated velocity of the user, which, in turn, can be determined based on the velocity provided by the velocity determining module 1006. The zoom particle generating module 1008 can set other parameters such as particle size, color and opacity of the zoom particles based on the current wattage or cadence readout from the ergometer. The zoom particle generating module 1008 can then combine the parameters that respond to velocity with those that respond to wattage, cadence and other factors to create a balanced effect that is optimized to enhance both responsiveness to input and the perception of speed.
Referring again to
The graphic enhancing module 1010 can be in communication with a display 1012 of the exercise apparatus. The display 1012 can display the graphics being provided by the graphic enhancing module 1010, including the particles that create the zoom particle effect, to improve the immersive visual experience of the user of the exercise apparatus. The display 1012 can, for example, be the 3D or holographic displays 102, 206 of
It should be understood that the system 800 of
The viewport refraction system and viewport particle system are complementary, providing a range of visual cues across the entire depth field of the viewport, from the nearest field close to the user to the distant horizon. When integrated with an ergometer, tracking camera and other sensors, they provide visceral and responsive feedback to users, making exercise more gratifying. In some embodiments, the systems 800, 1000 can be tuned to respond to input so as to guide the user's behavior with visual cues for an optimal exercise session. One method to achieve this involves modulating the intensity of the visual effects (e.g., viewport refraction, zoom particle effects) according to a non-linear correlation to exertion. For example, the graphic enhancing modules 808, 1010 of systems 800, 1000, respectively, can adjust the intensity of the motion perception amplification (e.g., effects of viewport refraction and/or zoom particle effects) in response to exertion according to a sigmoidal curve, such that the rate of change is more drastic within a target exertion range, making the ramp up to a target exertion threshold more gratifying and any decline below that threshold more pronounced. This allows users to viscerally feel whether they are getting closer to or deviating from the target threshold of exertion. Varying this target exertion threshold can be used to nudge the user to adjust their exercise intensity according to an optimal interval training pattern, such as high, medium, or low intensity interval training, depending on the user's individual needs or preferences.
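A sigmoidal mapping of the kind described might be sketched as follows, with the steepness constant as an illustrative tuning value:

```python
import math

def effect_intensity(exertion_watts: float, target_watts: float,
                     steepness: float = 0.05) -> float:
    """Sigmoid centered on the target exertion: the intensity (0..1) changes
    fastest near the target, so approaching or slipping from the target
    threshold produces the most visible change in the visual effects."""
    return 1.0 / (1.0 + math.exp(-steepness * (exertion_watts - target_watts)))
```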
The dynamic feedback provided by the viewport refraction and viewport particle systems can provide rhythmic stimuli that combine with other stimuli such as sound and haptic vibrations to induce neural entrainment in the user, a phenomenon wherein electrical oscillations in the brain naturally synchronize to external stimuli. Embodiments of the disclosure incorporate this technique to elicit targeted mental states that correspond to specific electrical oscillations in the brain and can be optimal for exercise and wellness.
In one embodiment, targeting desired mental states can be achieved by training an artificial intelligence (AI) or deep learning model with data collected during exercise sessions wherein one or more cameras and/or sensors observe and record electrical signals from participants' brains or more indirect biometric signals such as eye-movement, heart rate and perspiration that correspond with such electrical brain activity and associated mental states.
By recording such brain activity and biometric signals while generating and testing different variations and combinations of stimuli including viewport refraction and viewport particles as well as other visuals, sounds and haptics, an AI model 1108 in communication with the data recorder 1106 and the viewport refraction/viewport particle system 1110 (and other systems for generating stimuli) can learn which combinations of stimuli are associated with which electrical brain activities and biometric signals. The AI model 1108 can also receive self-reporting from the user on the user's experience on the exercise apparatus. The AI model 1108 can associate the patterns of brain activity and biometric signals with desired mental states that can be categorized as relaxed, focused, determined, etc. based on analysis of exercise performance, user self-reporting, or comparisons to known brain activity and biometric patterns. The model trained from this analysis is used to generate an exercise experience 1111 that adapts the viewport refraction and/or viewport particle systems 1110, optionally in conjunction with other stimuli generated by other stimulating system(s) 1112, in real-time to elicit desired mental states for an optimal workout.
The various modules of
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed systems and methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the systems and methods. It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.
This application claims the priority of U.S. patent application Ser. No. 18/587,333, filed on Feb. 26, 2024, which claims the priority of U.S. patent application Ser. No. 17/334,441, filed on May 28, 2021, and issued as U.S. Pat. No. 11,908,476 on Feb. 20, 2024, which claims the priority of U.S. Provisional Application Ser. No. 63/118,149, filed on Nov. 25, 2020. The entirety of all three priority applications is hereby incorporated by reference.