The invention generally relates to a head-mounted display system. More particularly, the invention relates to a head-mounted display system configured to be used for balance testing and training.
Measurement and testing systems are utilized in various fields to detect and analyze many different measurable quantities. For example, in biomedical applications, measurement and testing systems are used for gait analysis, assessing balance and mobility, evaluating sports performance, and assessing ergonomics. However, conventional measurement and testing systems have numerous limitations and drawbacks.
For example, conventional measurement and testing systems with large measurement surface areas and large visual displays are complex, difficult to install, and are not easily adaptable to different space configurations in a building. Also, these conventional measurement and testing systems are typically cost prohibitive for small clinical applications.
What is needed, therefore, is a head-mounted display system for assessing balance and mobility. Moreover, a head-mounted display system is needed for enhancing the visual motor performance of individuals. Furthermore, a need exists for a head-mounted display system as part of the rehabilitation regime for an orthopedic and/or neurological injury.
Accordingly, the present invention is directed to a head-mounted display system that substantially obviates one or more problems resulting from the limitations and deficiencies of the related art.
In accordance with one or more embodiments of the present invention, there is provided a head-mounted display system that includes an input device, the input device configured to output an input signal based upon an input response by a user; a head-mounted visual display device having an output screen, the head-mounted visual display device configured to display one or more screen images on the output screen so that the one or more screen images are viewable by the user; and at least one data processing device, the at least one data processing device operatively coupled to the input device and the head-mounted visual display device. The at least one data processing device is programmed to: (i) generate and display at least one displaceable visual object and at least one visual target on the output screen of the head-mounted visual display device; (ii) receive an input signal from the input device based upon an input response by the user; and (iii) control the movement of the at least one displaceable visual object towards the at least one visual target based upon the input signal received from the input device.
In a further embodiment of the present invention, the input device comprises a hand controller and the input signal comprises one or more hand control signals outputted by the hand controller, the one or more hand control signals being generated based upon a hand movement of the user; and the at least one data processing device is configured to receive the one or more hand control signals that are generated based upon the hand movement of the user, and to control the movement of the at least one displaceable visual object on the output screen of the head-mounted visual display device towards the at least one visual target.
In yet a further embodiment, the at least one data processing device is further configured to determine how closely the user is able to align the at least one displaceable visual object relative to the at least one visual target.
In still a further embodiment, the at least one visual target generated and displayed on the output screen by the at least one data processing device comprises a plurality of visual targets displayed in a particular sequence on the output screen, and the at least one data processing device is further configured to determine whether the user is able to correctly identify the plurality of visual targets displayed in the particular sequence when the user selects the plurality of visual targets using the at least one displaceable visual object on the output screen.
In yet a further embodiment, the at least one visual target generated and displayed on the output screen by the at least one data processing device comprises a plurality of visual targets displayed in a predetermined pattern on the output screen for a predetermined period of time, and the at least one data processing device is further configured to determine whether the user is able to correctly identify the plurality of visual targets displayed in the predetermined pattern when the user selects the plurality of visual targets using the at least one displaceable visual object on the output screen.
In still a further embodiment, the at least one visual target generated and displayed on the output screen by the at least one data processing device comprises a first visual object having a first color or shape and a second visual object having a second color or shape, and the first color or shape is different from the second color or shape. When the user is presented with the first visual object having the first color or shape, the at least one data processing device is programmed to determine whether the user performs a correct action by selecting the first visual object. When the user is presented with the second visual object having the second color or shape, the data processing device is programmed to determine whether the user performs a correct action by not selecting the second visual object.
In yet a further embodiment, the at least one visual target generated and displayed on the output screen by the at least one data processing device comprises a plurality of visual targets displayed on the output screen, and the at least one data processing device is further configured to randomly mark one of the plurality of visual targets and determine how quickly the user is able to correctly identify the marked one of the plurality of visual targets on the output screen.
In still a further embodiment, the at least one data processing device is further configured to generate and display a cognitive task on the output screen of the head-mounted display system together with the plurality of visual targets, and to determine whether the user is able to correctly perform the cognitive task when identifying the marked one of the plurality of visual targets on the output screen.
In yet a further embodiment, the user input device comprises a measurement assembly and the input signal comprises one or more measurement signals outputted by one or more measurement devices of the measurement assembly, and the one or more measurement signals are generated based upon the user's contact with a surface of the measurement assembly. The data processing device is configured to receive the one or more measurement signals that are generated based upon the user's contact with the surface of the measurement assembly and to compute one or more numerical values using the one or more measurement signals, and the data processing device is configured to control the movement of the at least one displaceable visual object on the output screen of the head-mounted visual display device towards the at least one visual target by using the one or more computed numerical values.
In still a further embodiment, the at least one data processing device is further configured to generate and display a displaceable scene on the output screen of the head-mounted visual display device, and the at least one visual target is superimposed on the displaceable scene.
In yet a further embodiment, the at least one visual target comprises a plurality of visual targets on the output screen of the head-mounted visual display device, and the at least one data processing device is further configured to determine how closely the user is able to displace the at least one displaceable visual object to each of the plurality of visual targets on the output screen.
In still a further embodiment, the at least one data processing device is provided as part of the head-mounted visual display device.
In yet a further embodiment, the at least one data processing device is separate from the head-mounted visual display device.
In still a further embodiment, the at least one data processing device comprises a first data processing device that is provided as part of the head-mounted visual display device and a second data processing device that is separate from the head-mounted visual display device.
In yet a further embodiment, the first data processing device of the head-mounted visual display device communicates wirelessly with the second data processing device that is separate from the head-mounted visual display device by means of a secure wireless connection.
In still a further embodiment, the second data processing device is programmed to determine whether the first data processing device of the head-mounted visual display device is running outdated software or requires an updated operating configuration; and, when the second data processing device determines that the first data processing device of the head-mounted visual display device is running outdated software or requires an updated operating configuration, the second data processing device automatically updates the software or operating configuration.
In accordance with one or more other embodiments of the present invention, there is provided a head-mounted display system that includes an input device, the input device configured to output an input signal based upon an input response by the user; a head-mounted visual display device having an output screen, the head-mounted visual display device configured to display one or more screen images on the output screen so that the one or more screen images are viewable by the user; and at least one data processing device, the at least one data processing device operatively coupled to the input device and the head-mounted visual display device. The at least one data processing device is programmed to: (i) generate and display at least one visual target on the output screen of the head-mounted visual display device; (ii) receive an input signal from the input device based upon an input response by the user; (iii) determine an orientation angle of a body portion of the user based upon the input signal received from the input device; and (iv) determine how closely the orientation angle of the body portion of the user corresponds to a tilt of the at least one visual target on the output screen of the head-mounted visual display device.
In a further embodiment of the present invention, the input device comprises a head position sensing device and the input signal comprises one or more measurement signals outputted by the head position sensing device, the one or more measurement signals being generated based upon a head movement of the user, and the orientation angle of the body portion of the user comprises a head angle of the user based upon the one or more measurement signals outputted by the head position sensing device.
In yet a further embodiment, the head-mounted visual display device comprises the head position sensing device.
It is to be understood that the foregoing summary and the following detailed description of the present invention are merely exemplary and explanatory in nature. As such, the foregoing summary and the following detailed description of the invention should not be construed to limit the scope of the appended claims in any sense.
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Throughout the figures, the same parts are always denoted using the same reference characters so that, as a general rule, they will only be described once.
The present invention is described herein, in an exemplary manner, with reference to computer system architecture and exemplary processes carried out by the computer system. In one or more embodiments, the functionality described herein can be implemented by computer system instructions. These computer program instructions may be loaded directly onto an internal data storage device of a computing device (e.g., an internal data storage device of a laptop computing device and/or a data processing device within a head-mounted display). Alternatively, these computer program instructions could be stored on a portable computer-readable medium (e.g., a flash drive, etc.), and then subsequently loaded onto a computing device such that the instructions can be executed thereby. In other embodiments, these computer program instructions could be embodied in the hardware of the computing device, rather than in the software thereof. It is also possible for the computer program instructions to be embodied in a combination of both the hardware and the software.
This description sets forth, in general form, the computer program(s) required to carry out the functionality of the head-mounted display system described herein. Any competent programmer in the field of information technology could develop a system using the description set forth herein.
For the sake of brevity, conventional computer system components, conventional data networking, and conventional software coding will not be described in detail herein. Also, it is to be understood that the connecting lines shown in the block diagram(s) included herein are intended to represent functional relationships and/or operational couplings between the various components. In addition to that which is explicitly depicted, it is to be understood that many alternative or additional functional relationships and/or physical connections may be incorporated in a practical application of the system.
1. Illustrative Head-Mounted Display System
An illustrative embodiment of a head-mounted display (HMD) system is seen generally at 100 in
In the illustrative embodiment, the head-mounted display 30 may have the following exemplary specifications: (i) a binocular display comprising a single liquid crystal display (LCD), (ii) a resolution of at least 1830×1920 per eye, (iii) a refresh rate of at least 90 Hz, (iv) a horizontal visible field of view (FOV) of at least 98 degrees, (v) a vertical visible field of view (FOV) of at least 90 degrees, (vi) a built-in eye tracking device, (vii) a Qualcomm Snapdragon 865 chipset, (viii) at least 6 GB of memory, and (ix) at least 128 GB of storage.
Referring again to the illustrative embodiment of
In addition, as shown in the illustrative embodiment of
In the illustrative embodiment, while performing the training routines and tests described hereinafter, the user may use the user input devices 34, 36 in order to enter and transmit his or her responses to the at least one data processing device (e.g., to the data processing device in the head-mounted display 30 and/or the remote laptop 20). For example, the user may use the user input devices 34, 36 to select a particular visual object on the output screen of the visual display device 32 of the head-mounted display 30.
In the illustrative embodiment, the head-mounted display system 100 may further include headset auto-discovery and connection functionality. In particular, the software running on the headset 30 may respond to a status ping sent as a standard UDP broadcast over the WiFi network, initiating a connection back to the base station (e.g., the laptop computing device 20) that sent the status ping. This, in turn, creates a handshaking opportunity for the two systems to negotiate a secure, encrypted client-server protocol.
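By way of non-limiting illustration, a minimal sketch of this auto-discovery exchange is set forth below; the port numbers, ping message contents, and function names are merely exemplary assumptions and do not form part of the system described above.

```python
import socket

# Hypothetical values for illustration only; the actual ports and message
# format used by the head-mounted display system are not specified here.
DISCOVERY_PORT = 45678          # UDP port on which the headset listens for status pings
CONNECT_BACK_PORT = 45679       # TCP port on the base station for the connection back
STATUS_PING = b"HMD_STATUS_PING"

def listen_for_base_station():
    """Headset side: wait for a status ping broadcast by the base station over the
    local WiFi network, then connect back to the sender so that the two systems can
    negotiate a secure client-server session."""
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    udp.bind(("", DISCOVERY_PORT))  # listen on all interfaces

    while True:
        payload, (sender_ip, _port) = udp.recvfrom(1024)
        if payload == STATUS_PING:
            # Initiate the connection back to the base station that sent the ping;
            # the secure handshake would then take place over this connection.
            return socket.create_connection((sender_ip, CONNECT_BACK_PORT), timeout=5.0)

def broadcast_status_ping():
    """Base-station side: broadcast the status ping so any headset on the network responds."""
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    udp.sendto(STATUS_PING, ("255.255.255.255", DISCOVERY_PORT))
```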
In the illustrative embodiment, the head-mounted display system 100 may further include headset hands-off configuration functionality. In particular, if, during the auto-discovery and connection, the base station (e.g., the laptop computing device 20) determines that the headset 30 is running outdated software or that an updated configuration is needed, the user is prompted to plug the headset 30 into the base station via a standard USB cable. The laptop software will then configure the headset 30 and restart it, performing all needed updates and configuration steps without any involvement or action by the user (a process that would otherwise require the user to wear the headset, use the controllers to navigate through several sub-menus and programs, and copy files from the computer or a USB thumb drive onto the headset). In the illustrative embodiment, the total automated configuration time is normally under two (2) minutes.
Now, turning again to the illustrative embodiment of
Referring again to
In the illustrative embodiment, the force measurement assembly 10 is operatively coupled to the data processing device 20 by virtue of an electrical cable. In one embodiment, the electrical cable is used for data transmission, as well as for providing power to the force measurement assembly 10. Various types of data transmission cables can be used for the cable of the force measurement assembly 10. For example, the cable can be a Universal Serial Bus (USB) cable or an Ethernet cable. Preferably, the electrical cable contains a plurality of electrical wires bundled together, with at least one wire being used for power and at least another wire being used for transmitting data. The bundling of the power and data transmission wires into a single electrical cable advantageously creates a simpler and more efficient design. In addition, it enhances the safety of the training environment for the user. However, it is to be understood that the force measurement assembly 10 can be operatively coupled to the data processing device 20 using other signal transmission means, such as a wireless data transmission system. If a wireless data transmission system is employed, it is preferable to provide the force measurement assembly 10 with a separate power supply in the form of an internal power supply or a dedicated external power supply.
Now, the acquisition and processing of the load data carried out by the illustrative embodiment of the head-mounted display system 100 will be described. Initially, a load is applied to the force measurement assembly 10 by the user disposed thereon. The load is transmitted from the first and second plate components 12, 14 of the dual force plate 10 to its force transducer beams. In the illustrative embodiment, each plate component 12, 14 of the dual force plate 10 is supported on a pair of force transducer beams disposed thereunder. In the illustrative embodiment, each of the force transducer beams includes a plurality of strain gages wired in one or more Wheatstone bridge configurations, wherein the electrical resistance of each strain gage is altered when the associated portion of the associated beam-type force transducer undergoes deformation (i.e., a measured quantity) resulting from the load (i.e., forces and/or moments) acting on the first and second plate components 12, 14. For each plurality of strain gages disposed on the force transducer beams, the change in the electrical resistance of the strain gages brings about a consequential change in the output voltage of the Wheatstone bridge (i.e., a quantity representative of the load being applied to the measurement surface). Thus, in the illustrative embodiment, the pairs of force transducer beams disposed under the plate components 12, 14 output a total of six (6) analog output voltages (signals). In the illustrative embodiment, the six (6) analog output voltages from the dual force plate are then transmitted to a preamplifier board (not shown) for preconditioning. The preamplifier board is used to increase the magnitudes of the transducer analog voltages, and preferably, to convert the analog voltage signals into digital voltage signals as well. Thereafter, the force measurement assembly 10 transmits the force plate output signals SFPO1-SFPO6 to a main signal amplifier/converter. Depending on whether the preamplifier board also includes an analog-to-digital (A/D) converter, the force plate output signals SFPO1-SFPO6 could be either in the form of analog signals or digital signals. The main signal amplifier/converter further magnifies the force plate output signals SFPO1-SFPO6, and if the signals SFPO1-SFPO6 are of the analog-type (for a case where the preamplifier board did not include an analog-to-digital (A/D) converter), it may also convert the analog signals to digital signals. In the illustrative embodiment, the force plate output signals SFPO1-SFPO6 may also be transformed into output forces and/or moments (e.g., FLz, MLx, MLy, FRz, MRx, MRy) by the firmware of the dual force plate by multiplying the voltage signals SFPO1-SFPO6 by a calibration matrix prior to the force plate output data being transmitted to the data processing device 20. Alternatively, the data acquisition/data processing device 20 may receive the voltage signals SFPO1-SFPO6, and then transform the signals into output forces and/or moments (e.g., FLz, MLx, MLy, FRz, MRx, MRy) by multiplying the voltage signals SFPO1-SFPO6 by a calibration matrix.
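By way of non-limiting illustration, a minimal sketch of the calibration-matrix transformation described above is set forth below; the identity calibration matrix, the signal ordering, and the function name are merely exemplary assumptions.

```python
import numpy as np

# Illustrative placeholder; in practice, the 6x6 calibration matrix is determined
# during factory calibration of the dual force plate and is not reproduced here.
CAL_MATRIX = np.eye(6)

def voltages_to_loads(s_fpo: np.ndarray) -> dict:
    """Transform the six digitized force plate output signals (SFPO1-SFPO6)
    into output forces and moments (FLz, MLx, MLy, FRz, MRx, MRy) by
    multiplying the voltage vector by the calibration matrix."""
    assert s_fpo.shape == (6,)
    loads = CAL_MATRIX @ s_fpo
    names = ["FLz", "MLx", "MLy", "FRz", "MRx", "MRy"]
    return dict(zip(names, loads))

# Example usage with six sampled voltage values:
# loads = voltages_to_loads(np.array([0.12, 0.03, -0.01, 0.11, 0.02, -0.02]))
```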
After the voltage signals SFPO1-SFPO6 are transformed into output forces and/or moments (e.g., FLz, MLx, MLy, FRz, MRx, MRy), the center of pressure for each foot of the user (i.e., the x and y coordinates of the point of application of the force applied to the measurement surface by each foot) may be determined by the data acquisition/data processing device 20. If the force transducer technology described in U.S. Pat. No. 8,544,347 is employed, it is to be understood that the center of pressure coordinates (xP
In one or more alternative embodiments, the data processing device 20 determines the vertical forces FLz, FRz exerted on the surface of the first and second force plates by the feet of the user and the center of pressure for each foot of the user, while in another embodiment where a six-component force plate is used, the output forces of the data processing device 20 include all three (3) orthogonal components of the resultant forces acting on the two plate components 12, 14 (i.e., FLx, FLy, FLz, FRx, FRy, FRz) and all three (3) orthogonal components of the moments acting on the two plate components 12, 14 (i.e., MLx, MLy, MLz, MRx, MRy, MRz). In yet other embodiments of the invention, the output forces and moments of the data processing device 20 can be in the form of other forces and moments as well.
In the illustrative embodiment, where a single set of overall center of pressure coordinates (xP, yP) is determined for the force measurement assembly 10, the center of pressure of the force vector F applied by the user to the measurement surface of the force plate 22 may be computed from the vertical component Fz of the resultant force and the resultant moments Mx, My about the horizontal axes of the measurement surface.
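By way of non-limiting example, one common force plate relationship consistent with the quantities identified above (with the x- and y-axes lying in the plane of the measurement surface, and any vertical offset between the measurement surface and the force transducers being neglected) is:

```latex
x_P = -\frac{M_y}{F_z}\,, \qquad y_P = \frac{M_x}{F_z}
```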
In addition, in a further embodiment, the head-mounted display system 100 further comprises a data interface configured to operatively couple the data processing device 20 to a remote computing device (e.g., remote laptop or desktop computing device) so that data from the data processing device 20 is capable of being transmitted to the remote computing device. In one or more embodiments, the data interface may comprise a wireless data interface or a wired data interface operatively coupling the data processing device 20 to the remote computing device.
2. Testing and Training Functionality of the Head-Mounted Display System
Now, with reference to the screen images of
An exemplary screen image of an operator/clinician home screen 50 of the head-mounted display system 100 is shown in
Turning to
In the first set of training routines carried out by the illustrative head-mounted display system 100, the user input device comprises one or more hand controllers 34, 36 of the head-mounted visual display device 30, and the input signal to the at least one data processing device (i.e., the first data processing device and/or the second data processing device) comprises one or more hand control signals outputted by the one or more hand controllers 34, 36. The one or more hand control signals are generated based upon a hand movement of the user. The at least one data processing device is configured to receive the one or more hand control signals that are generated based upon the hand movement of the user, and to control the movement of at least one displaceable visual object on the output screen 32 of the head-mounted visual display device 30 towards at least one visual target.
In particular, in one subset of the first set of illustrative training routines, the visual scene on the visual display device 32 of the head-mounted display 30 is designed to assess the user's ability to align a rod with respect to the gravitational vertical (0°) or horizontal by using the hand controller 34, 36 or the middle finger trigger of the controller. The objective is for the user to use the controller 34, 36 to align the rod with respect to the gravitational vertical (SVV and R&F) or horizontal (SVH) in a fully immersive virtual reality (VR) scene. In these training routines, the at least one data processing device is configured to determine how closely the user is able to align the at least one displaceable visual object (e.g., the rod) relative to the at least one visual target (gravitational vertical or horizontal). During the training routines, the user (e.g., the patient) is asked to use the right and/or left controllers 34, 36 to position the rod at the gravitational vertical or horizontal. He or she will submit his or her response by using the index trigger button 38. Once the response is submitted, the correct rod angle will be displayed. Both rods will then disappear, and a new rod will appear. Various levels of difficulty can be implemented with optokinetic and visual flow options.
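By way of non-limiting illustration, a minimal sketch of one way in which the at least one data processing device might score how closely the user aligns the rod with the gravitational vertical or horizontal is set forth below; the function name, angle conventions, and wrapping rule are merely exemplary assumptions.

```python
def alignment_error_deg(submitted_angle_deg: float, target_angle_deg: float = 0.0) -> float:
    """Return the signed angular error (in degrees) between the rod angle submitted by
    the user and the target orientation (0 degrees for the gravitational vertical in the
    SVV and R&F protocols, or 90 degrees for the horizontal in the SVH protocol).
    The error is wrapped into the range (-90, 90] so that a rod and its mirror image
    are treated as the same orientation."""
    error = (submitted_angle_deg - target_angle_deg) % 180.0
    if error > 90.0:
        error -= 180.0
    return error

# Example: a rod submitted at 3.5 degrees relative to a vertical target (0 degrees)
# yields an error of +3.5 degrees for that trial; the mean absolute error over all
# trials may then serve as a summary alignment score.
```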
In the illustrative embodiment, the operator can choose from different optokinetic scenes that will be used in the background of the rod (e.g., striped or starfield). Also, the operator can define the color of the rod (e.g., black, red, or green). The operator additionally can define the direction of the optokinetic movement (e.g., up, down, left, or right), the speed of the optokinetic movement (e.g., speeds ranging from 0 to 25 in increments of 5), and the density of the optokinetic scene (e.g., low, medium, or high).
In the illustrative embodiment, the operator can choose the type of visual flow scene that will be used in the background of the rod (e.g., park, boardwalk, or driving). Also, the operator can define the speed of the scene's movement (e.g., slow, medium, or fast).
In another one of the first set of illustrative training routines, the user is positioned either sitting or standing, with the headset 30 on and the controllers 34, 36 in the correct hands. The user is shown a sequence of items 144, 146 sitting on a grocery store shelf 142 (see screen image 140 in
In yet another one of the first set of illustrative training routines, the user is positioned either sitting or standing, with the headset 30 on and the controllers 34, 36 in the correct hands. The user is shown several items 154, 156 on a grocery shelf 152 at the same time (see screen image 150 in
In still another one of the first set of illustrative training routines, the user is either sitting or standing with the headset 30 on and the controllers 34, 36 in the correct hands. He or she is shown a grocery store shelf 162 with an item 164 on it that will be randomly highlighted in one of two colors (see screen image 160 in
In yet another one of the first set of illustrative training routines, the user is either sitting or standing with the headset 30 on and the controllers 34, 36 in the correct hands. A grocery store shelf scene will be displayed on the headset 30 (see screen image 170 in
With reference to the screen image 180 in
In the second set of training routines carried out by the illustrative head-mounted display system 100, the user input device comprises a force measurement assembly 10 (see e.g., user 40 disposed on dual force plate 10) and the input signal comprises one or more measurement signals outputted by one or more force measurement devices of the force measurement assembly. The one or more measurement signals are generated based upon the user's contact with a surface of the force measurement assembly 10. The at least one data processing device is configured to receive the one or more measurement signals that are generated based upon the user's contact with the surface of the force measurement assembly 10 and to compute one or more numerical values using the one or more measurement signals. The data processing device is configured to control the movement of the at least one displaceable visual object on the output screen 32 of the head-mounted visual display device 30 towards the at least one visual target by using the one or more computed numerical values.
In the second set of training routines carried out by the illustrative head-mounted display system 100, the center of pressure (center-of-gravity) of the user, which is determined by the at least one data processing device from the force and moment output data of the force measurement assembly 10 described above, is used to control the displacement of a visual object (e.g., a cursor) on the output screen 32 of the head-mounted visual display device 30.
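By way of non-limiting illustration, a minimal sketch of one way in which the computed center-of-pressure coordinates could be mapped to the position of the displaceable cursor is set forth below; the sway limits, the clamping behavior, and the screen-coordinate convention are merely exemplary assumptions.

```python
def cop_to_cursor(x_p: float, y_p: float,
                  sway_limit_x: float = 0.10, sway_limit_y: float = 0.10,
                  screen_half_width: float = 1.0, screen_half_height: float = 1.0):
    """Map the user's center-of-pressure coordinates (x_p, y_p), in meters relative to
    the center of the force measurement assembly, to cursor coordinates on the output
    screen. The sway limits (assumed here to be +/-0.10 m) define the weight shift that
    drives the cursor to the edge of its travel; larger shifts are clamped."""
    nx = max(-1.0, min(1.0, x_p / sway_limit_x))
    ny = max(-1.0, min(1.0, y_p / sway_limit_y))
    return nx * screen_half_width, ny * screen_half_height

# Example: a 5 cm weight shift to the right moves the cursor halfway toward the
# right-hand edge of its travel.
# cursor_x, cursor_y = cop_to_cursor(0.05, 0.0)
```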
In particular, in one subset of the second set of illustrative training routines, the user is asked to maintain balance throughout a moving scene while having the option of dodging objects by shifting his or her center-of-gravity (COG), answering simple questions, and being presented with various distractions, including noise distractions. In this training routine, the at least one data processing device is configured to generate and display a displaceable scene (see e.g.,
In another subset of the second set of illustrative training routines, the user is positioned correctly on the force/balance plate 10 and the headset 30 is placed on the user's head. The operator/clinician then chooses one of the quick training protocols and one of the seven scene options. Each protocol has a different target area, and the user is to remain on the balance plate 10 and shift his or her weight towards the plurality of targets being displayed on the headset 30 (see e.g.,
An exemplary operator/clinician screen 108 for the fountain quick training scene is depicted in
In the third set of training routines carried out by the illustrative head-mounted display system 100, the user input device comprises a head position sensing device (e.g., an inertial measurement unit for measuring a user's head position, rotation, velocity, and acceleration), and the input signal comprises one or more measurement signals outputted by the head position sensing device. The one or more measurement signals are generated based upon a head movement of the user, and the at least one data processing device is configured to determine a head angle of the user based upon the one or more measurement signals outputted by the head position sensing device. In the illustrative embodiment, the head-mounted visual display device 30 may comprise the head position sensing device (i.e., the head-mounted visual display device 30 may comprise the inertial measurement unit disposed therein).
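By way of non-limiting illustration, a minimal sketch of one way in which a head tilt (roll) angle might be derived from an orientation quaternion reported by such a head position sensing device, and compared against a displayed rod tilt angle, is set forth below; the quaternion axis convention, the +/−45 degree limit handling, and the function names are merely exemplary assumptions.

```python
import math

def head_roll_deg(qw: float, qx: float, qy: float, qz: float) -> float:
    """Extract the roll (tilt about the forward-looking axis) in degrees from a unit
    quaternion reported by the head position sensing device. The axis convention
    (x forward, roll about x) is assumed here for illustration."""
    roll = math.atan2(2.0 * (qw * qx + qy * qz), 1.0 - 2.0 * (qx * qx + qy * qy))
    return math.degrees(roll)

def head_tilt_error_deg(quaternion, rod_tilt_deg: float, max_tilt_deg: float = 45.0) -> float:
    """Return the difference between the user's head tilt and the rod tilt angle,
    with the rod tilt limited to the +/-45 degree range used in the training routine."""
    rod_tilt_deg = max(-max_tilt_deg, min(max_tilt_deg, rod_tilt_deg))
    return head_roll_deg(*quaternion) - rod_tilt_deg
```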
In particular, in one subset of the third set of illustrative training routines, using a rod on the screen, a user must align his or her head with the rod and use the index trigger on the hand controller 34, 36 to submit his or her response when he or she is aligned. In the illustrative embodiment, the rod will disappear on submission and a new rod will appear at a new angle, at which point the user will align with the new rod. This routine continues for the number of trials selected. The objective in this training routine is for the user to tilt his or her head (+/−45 degrees max) to align with the rod or scene inside the headset 30. The visual scene is designed for the user to align his or her head with the rod tilt angle or scene that appears on the screen 32 of the headset 30 while the user is sitting or standing. In the illustrative embodiment, there are three (3) separate Head Tilt Response (HTR) training protocols: (i) head tilt response (HTR), (ii) head tilt response visual flow (HTR-VF), and (iii) head tilt response optokinetics (HTR-OPK). An example of head tilt response optokinetics is shown in
Next, referring to
In a further illustrative embodiment, the head-mounted display system 100 may include headset data collection and synthesis functionality. In particular, during the specific training/test protocols, the headset 30 records and transmits position and rotation values, along with velocity (e.g., now at 5 degrees left and moving at 2.3 degrees per second). The headset 30 also reports eye tracking movement and gaze lingering, along with hand controller position/rotation/velocity. This data from the headset 30 is sampled at a high rate and combined with the force plate data to present a complete ‘picture’ of the user at any given moment in time (the data is sampled at 1000 Hz to match the force plate 10). By combining the data within existing protocols (e.g., Quick Training, where the user must follow the dots by shifting body weight), it can be determined whether the user is swaying his or her upper body, whether he or she is turning his or her head to look at the target pattern, and/or whether his or her eyes are tracing the displaceable cursor 128, 138 as it moves into the target (or whether he or she looks at the target first and then moves). The visual flow scenes, such as the boardwalk and forest scenes, provide additional opportunities for data synthesis, with the eye tracking following the various distractors, such as the birds flying through the forest in the forest scene
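By way of non-limiting illustration, a minimal sketch of one way in which the headset data stream could be aligned with the 1000 Hz force plate data stream (using nearest-preceding-sample matching) is set forth below; the array layout and function name are merely exemplary assumptions.

```python
import numpy as np

def align_headset_to_force_plate(fp_timestamps: np.ndarray,
                                 hmd_timestamps: np.ndarray,
                                 hmd_samples: np.ndarray) -> np.ndarray:
    """For each 1000 Hz force plate timestamp, select the most recent headset sample
    (head position/rotation, eye gaze, controller pose) so that the two streams can be
    combined into a single 'picture' of the user at each moment in time.
    hmd_samples has one row per entry in hmd_timestamps."""
    # Index of the last headset sample at or before each force plate timestamp.
    idx = np.searchsorted(hmd_timestamps, fp_timestamps, side="right") - 1
    idx = np.clip(idx, 0, len(hmd_timestamps) - 1)
    return hmd_samples[idx]

# Example: fp_timestamps sampled at 1000 Hz, hmd_timestamps at the headset's native
# rate; the returned array has one aligned headset row per force plate sample.
```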
In the third set of illustrative training routines, the inertial measurement unit (IMU) forming the head position sensing device in the headset 30 may comprise a triaxial (three-axis) accelerometer sensing linear acceleration a′, a triaxial (three-axis) rate gyroscope sensing angular velocity ω′, a triaxial (three-axis) magnetometer sensing the magnetic north vector n′, and a central control unit or microprocessor operatively coupled to each of the accelerometer, the gyroscope, and the magnetometer.
Next, an illustrative manner in which the at least one data processing device of the head-mounted display system 100 performs the inertial measurement unit (IMU) calculations will be explained in detail. In particular, this calculation procedure will describe the manner in which the orientation and position of the head of the user could be determined using the signals from the inertial measurement unit (IMU) of the system 100. As explained above, in the illustrative embodiment, the inertial measurement unit may include the following three triaxial sensor devices: (i) a three-axis accelerometer sensing linear acceleration a′, (ii) a three-axis rate gyroscope sensing angular velocity ω′, and (iii) a three-axis magnetometer sensing the magnetic north vector n′. The inertial measurement unit senses in the local (primed) frame of reference attached to the IMU itself. Because each of the sensor devices in the IMU is triaxial, the vectors a′, ω′, n′ are each 3-component vectors. A prime symbol is used in conjunction with each of these vectors to symbolize that the measurements are taken in accordance with the local reference frame. The unprimed vectors that will be described hereinafter are in the global reference frame.
The objective of these calculations is to find the orientation θ(t) and position R(t) in the global, unprimed, inertial frame of reference. Initially, the calculation procedure begins with a known initial orientation θ0 and position R0 in the global frame of reference.
For the purposes of the calculation procedure, a right-handed coordinate system is assumed for both global and local frames of reference. The global frame of reference is attached to the Earth. The acceleration due to gravity is assumed to be a constant vector g. Also, for the purposes of the calculations presented herein, it is presumed that the sensor devices of the inertial measurement unit (IMU) provide calibrated data. In addition, all of the signals from the IMU are treated as continuous functions of time, although it is to be understood that the general form of the equations described herein may be readily discretized to account for IMU sensor devices that take discrete time samples from a bandwidth-limited continuous signal.
The orientation θ(t) is obtained by single integration of the angular velocity as follows:
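One representative form of this relationship, presented here by way of non-limiting example, integrates the locally sensed angular velocity ω′(τ) after it has been rotated into the global frame by Θ(τ), starting from the known initial orientation θ0:

```latex
\vec{\theta}(t) \;=\; \vec{\theta}_0 \;+\; \int_{0}^{t} \vec{\Theta}(\tau)\,\vec{\omega}'(\tau)\, d\tau
```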
where Θ(t) is the matrix of the rotation transformation that rotates the instantaneous local frame of reference into the global frame of reference.
The position is obtained by double integration of the linear acceleration in the global reference frame. The triaxial accelerometer of the IMU senses the acceleration a′ in the local reference frame. The acceleration a′ has the following contributors: (i) the acceleration due to translational motion, (ii) the acceleration of gravity, and (iii) the centrifugal, Coriolis, and Euler accelerations due to rotational motion. All but the first contributor have to be removed as a part of the change of reference frames. The centrifugal and Euler accelerations are zero when the acceleration measurements are taken at the origin of the local reference frame. The first integration gives the linear velocity as follows:
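A representative form of this first integration, presented by way of non-limiting example and consistent with the contributors identified above (the gravity and Coriolis contributions being removed after the locally sensed acceleration is rotated into the global frame, with ω denoting the angular velocity of the rotating reference frame), is:

```latex
\vec{v}(t) \;=\; \vec{v}_0 \;+\; \int_{0}^{t} \Big[\, \vec{\Theta}(\tau)\,\vec{a}'(\tau) \;-\; \vec{g} \;-\; 2\,\vec{\omega}\times\vec{v}\,'(\tau) \,\Big]\, d\tau
```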
where 2ω×v′(t) is the Coriolis term, and where the local linear velocity is given by the following equation:

$\vec{v}\,'(t) = \vec{\Theta}^{-1}(t)\,\vec{v}(t)$   (7)
The initial velocity v0 can be taken to be zero if the motion is being measured for short periods of time in relation to the duration of Earth's rotation. The second integration gives the position as follows:
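A representative form of this second integration, presented by way of non-limiting example and starting from the known initial position R0, is:

```latex
\vec{R}(t) \;=\; \vec{R}_0 \;+\; \int_{0}^{t} \vec{v}(\tau)\, d\tau \qquad (8)
```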
At the initial position, the IMU's local-to-global rotation matrix has an initial value Θ(0) ≡ Θ0. This value can be derived by knowing the local and global values of both the magnetic north vector and the acceleration of gravity. Those two vectors are usually non-parallel. This is the requirement for Θ0(g′, n′, g, n) to be unique. The knowledge of either of those vectors in isolation gives a family of non-unique solutions Θ0(g′, g) or Θ0(n′, n) that are unconstrained in one component of rotation. The computation of Θ0(g′, n′, g, n) has many implementations, with a common one being the Kabsch algorithm. As such, using the calculation procedure described above, the at least one data processing device of the system 100 may determine the orientation θ(t) and position R(t) of one or more body portions of the user. For example, the orientation of the head of the user may be determined by computing the orientation θ(t) and position R(t) of two points on the head of the user (i.e., at the respective locations of two inertial measurement units (IMUs) disposed on the head of the user).
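By way of non-limiting illustration, a minimal sketch of one way in which Θ0 could be estimated from the gravity and magnetic north vectors using a Kabsch-style alignment (here, the align_vectors routine of the SciPy library) is set forth below; the variable names and the use of SciPy are merely exemplary assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def initial_orientation(g_local: np.ndarray, n_local: np.ndarray,
                        g_global: np.ndarray, n_global: np.ndarray) -> np.ndarray:
    """Estimate the initial local-to-global rotation matrix (Theta_0) from the gravity
    and magnetic north vectors expressed in both frames, using the Kabsch-style
    alignment provided by SciPy. Because the two vectors are non-parallel, the
    resulting rotation is unique."""
    # Normalize so that both vector pairs are weighted comparably.
    local = np.vstack([g_local / np.linalg.norm(g_local),
                       n_local / np.linalg.norm(n_local)])
    global_ = np.vstack([g_global / np.linalg.norm(g_global),
                         n_global / np.linalg.norm(n_global)])
    # align_vectors finds the rotation that best maps the local vectors onto the global ones.
    rot, _rmsd = Rotation.align_vectors(global_, local)
    return rot.as_matrix()
```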
In one or more alternative embodiments, rather than using an inertial measurement unit (IMU) that includes an accelerometer, a gyroscope, and a magnetometer, a single accelerometer may be used to simply measure the displacement of the head of the user (e.g., by using equation (8) described above). As explained above, the acceleration output from the accelerometer may be integrated twice in order to obtain the positional displacement of the head of the user.
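By way of non-limiting illustration, a minimal sketch of this double integration using a simple trapezoidal rule is set forth below; it assumes that the gravity component has already been removed from the accelerometer output, and the sampling interval and function name are merely exemplary.

```python
import numpy as np

def displacement_from_acceleration(accel: np.ndarray, dt: float) -> np.ndarray:
    """Integrate a gravity-compensated acceleration signal twice (trapezoidal rule)
    to estimate the positional displacement of the head over time.
    accel: array of shape (N, 3) in m/s^2; dt: sampling interval in seconds."""
    velocity = np.zeros_like(accel)
    position = np.zeros_like(accel)
    for i in range(1, len(accel)):
        velocity[i] = velocity[i - 1] + 0.5 * (accel[i - 1] + accel[i]) * dt
        position[i] = position[i - 1] + 0.5 * (velocity[i - 1] + velocity[i]) * dt
    return position

# Example: 100 samples of zero acceleration yield zero displacement.
# disp = displacement_from_acceleration(np.zeros((100, 3)), dt=0.001)
```

In practice, integration drift accumulates quickly with this approach, so such an estimate is typically limited to short time windows or combined with filtering.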
It is readily apparent that the illustrative head-mounted display system 100 described above offers numerous advantages and benefits. For example, the head-mounted display system 100 is very beneficial for assessing balance and mobility. As another example, the head-mounted display system 100 is useful for enhancing the visual motor performance of individuals. As yet another example, the head-mounted display system 100 may be used as part of the rehabilitation regime for an orthopedic and/or neurological injury.
While reference is made throughout this disclosure to, for example, “an illustrative embodiment”, “one embodiment”, or a “further embodiment”, it is to be understood that some or all aspects of these various embodiments may be combined with one another as part of an overall embodiment of the invention. That is, any of the features or attributes of the aforedescribed embodiments may be used in combination with any of the other features and attributes of the aforedescribed embodiments as desired.
Although the invention has been shown and described with respect to a certain embodiment or embodiments, it is apparent that this invention can be embodied in many different forms and that many other modifications and variations are possible without departing from the spirit and scope of this invention.
Moreover, while exemplary embodiments have been described herein, one of ordinary skill in the art will readily appreciate that the exemplary embodiments set forth above are merely illustrative in nature and should not be construed as to limit the claims in any manner. Rather, the scope of the invention is defined only by the appended claims and their equivalents, and not, by the preceding description.
Number | Name | Date | Kind |
---|---|---|---|
6038488 | Barnes et al. | Mar 2000 | A |
6113237 | Ober et al. | Sep 2000 | A |
6152564 | Ober et al. | Nov 2000 | A |
6295878 | Berme | Oct 2001 | B1 |
6354155 | Berme | Mar 2002 | B1 |
6389883 | Berme et al. | May 2002 | B1 |
6936016 | Berme et al. | Aug 2005 | B2 |
8181541 | Berme | May 2012 | B2 |
8315822 | Berme et al. | Nov 2012 | B2 |
8315823 | Berme et al. | Nov 2012 | B2 |
D689388 | Berme | Sep 2013 | S |
D689389 | Berme | Sep 2013 | S |
8543540 | Wilson et al. | Sep 2013 | B1 |
8544347 | Berme | Oct 2013 | B1 |
8643669 | Wilson et al. | Feb 2014 | B1 |
8700569 | Wilson et al. | Apr 2014 | B1 |
8704855 | Berme et al. | Apr 2014 | B1 |
8764532 | Berme | Jul 2014 | B1 |
8847989 | Berme et al. | Sep 2014 | B1 |
D715669 | Berme | Oct 2014 | S |
8902249 | Wilson et al. | Dec 2014 | B1 |
8915149 | Berme | Dec 2014 | B1 |
9032817 | Berme et al. | May 2015 | B2 |
9043278 | Wilson et al. | May 2015 | B1 |
9066667 | Berme et al. | Jun 2015 | B1 |
9081436 | Berme et al. | Jul 2015 | B1 |
9168420 | Berme et al. | Oct 2015 | B1 |
9173596 | Berme et al. | Nov 2015 | B1 |
9200897 | Wilson et al. | Dec 2015 | B1 |
9277857 | Berme et al. | Mar 2016 | B1 |
D755067 | Berme et al. | May 2016 | S |
9404823 | Berme et al. | Aug 2016 | B1 |
9414784 | Berme et al. | Aug 2016 | B1 |
9468370 | Shearer | Oct 2016 | B1 |
9517008 | Berme et al. | Dec 2016 | B1 |
9526443 | Berme et al. | Dec 2016 | B1 |
9526451 | Berme | Dec 2016 | B1 |
9558399 | Jeka et al. | Jan 2017 | B1 |
9568382 | Berme et al. | Feb 2017 | B1 |
9622686 | Berme et al. | Apr 2017 | B1 |
9763604 | Berme et al. | Sep 2017 | B1 |
9770203 | Berme et al. | Sep 2017 | B1 |
9778119 | Berme et al. | Oct 2017 | B2 |
9814430 | Berme et al. | Nov 2017 | B1 |
9829311 | Wilson et al. | Nov 2017 | B1 |
9854997 | Berme et al. | Jan 2018 | B1 |
9916011 | Berme et al. | Mar 2018 | B1 |
9927312 | Berme et al. | Mar 2018 | B1 |
10010248 | Shearer | Jul 2018 | B1 |
10010286 | Berme et al. | Jul 2018 | B1 |
10085676 | Berme et al. | Oct 2018 | B1 |
10117602 | Berme et al. | Nov 2018 | B1 |
10126186 | Berme et al. | Nov 2018 | B2 |
10216262 | Berme et al. | Feb 2019 | B1 |
10231662 | Berme et al. | Mar 2019 | B1 |
10264964 | Berme et al. | Apr 2019 | B1 |
10331324 | Wilson et al. | Jun 2019 | B1 |
10342473 | Berme et al. | Jul 2019 | B1 |
10390736 | Berme et al. | Aug 2019 | B1 |
10413230 | Berme et al. | Sep 2019 | B1 |
10463250 | Berme et al. | Nov 2019 | B1 |
10527508 | Berme et al. | Jan 2020 | B2 |
10555688 | Berme et al. | Feb 2020 | B1 |
10646153 | Berme et al. | May 2020 | B1 |
10722114 | Berme et al. | Jul 2020 | B1 |
10736545 | Berme et al. | Aug 2020 | B1 |
10765936 | Berme et al. | Sep 2020 | B2 |
10803990 | Wilson et al. | Oct 2020 | B1 |
10853970 | Akbas et al. | Dec 2020 | B1 |
10856796 | Berme et al. | Dec 2020 | B1 |
10860843 | Berme et al. | Dec 2020 | B1 |
10945599 | Berme et al. | Mar 2021 | B1 |
10966606 | Berme | Apr 2021 | B1 |
11033453 | Berme et al. | Jun 2021 | B1 |
11052288 | Berme et al. | Jul 2021 | B1 |
11054325 | Berme et al. | Jul 2021 | B2 |
11074711 | Akbas et al. | Jul 2021 | B1 |
11097154 | Berme et al. | Aug 2021 | B1 |
11158422 | Wilson et al. | Oct 2021 | B1 |
11182924 | Akbas et al. | Nov 2021 | B1 |
11262231 | Berme et al. | Mar 2022 | B1 |
11262258 | Berme et al. | Mar 2022 | B2 |
11301045 | Berme et al. | Apr 2022 | B1 |
11311209 | Berme et al. | Apr 2022 | B1 |
11321868 | Akbas et al. | May 2022 | B1 |
11337606 | Berme et al. | May 2022 | B1 |
11348279 | Akbas et al. | May 2022 | B1 |
11458362 | Berme et al. | Oct 2022 | B1 |
11521373 | Akbas et al. | Dec 2022 | B1 |
11540744 | Berme | Jan 2023 | B1 |
20030216656 | Berme et al. | Nov 2003 | A1 |
20080228110 | Berme | Sep 2008 | A1 |
20110277562 | Berme | Nov 2011 | A1 |
20120266648 | Berme et al. | Oct 2012 | A1 |
20120271565 | Berme et al. | Oct 2012 | A1 |
20150096387 | Berme et al. | Apr 2015 | A1 |
20160245711 | Berme et al. | Aug 2016 | A1 |
20160334288 | Berme et al. | Nov 2016 | A1 |
20180024015 | Berme et al. | Jan 2018 | A1 |
20190078951 | Berme et al. | Mar 2019 | A1 |
20200139229 | Berme et al. | May 2020 | A1 |
20200408625 | Berme et al. | Dec 2020 | A1 |
20210333163 | Berme et al. | Oct 2021 | A1 |
20220178775 | Berme et al. | Jun 2022 | A1 |