The present application generally relates to the field of motion tracking and analysis of a subject or object.
Recent progress in image processing makes it possible to track the movement of a subject, in particular a human body or parts thereof, over time by means of a camera sensor and a computing device.
For instance, Microsoft's Kinect™ sensor includes an RGB video camera, microphones, and an infrared sensor and allows depth images to be obtained. It further enables gesture recognition, speech recognition, and body skeletal detection. It is commonly used to mediate an interaction between a human and a computing device, for instance in the context of gaming, and for unobtrusive movement analysis of a subject, in particular of a human body.
The evolution of sensor technology has also enabled the healthcare community to use digital tools to generate real-world data and real-world evidence. Accurate assessments of spatial and temporal movement characteristics have important applications, e.g. in the fields of physiotherapy, fitness and health, but are not limited thereto. Applications include tracking the extent of a subject's movement, determining the accuracy at which the subject performs certain predefined movements, and assessing how the movement of a subject repeatedly performing the same movement evolves over time. The analysis can be tailored to a specific subject, e.g. a person. Such personalized analysis can e.g. adapt to properties of the body of a specific person and to the movement capabilities of the tracked person.
In the field of motion analysis for fitness and health, it is common to use machine learning (ML) technology to measure and analyze movements of a person in a fast and accurate manner with a 2-dimensional camera-based system. However, there are several challenges and problems with this approach: many repetitions of a specific movement are required to create and train an ML model before accurate and personalized tracking of a person's body is possible.
Conventional motion analysis for fitness and health requires significant computational effort for analyzing input data to match threshold parameters, which can be time consuming, may not be easily done in real-time with the resources available on a mobile computing device, and therefore can require cloud computing services.
Moreover, conventional camera systems used for implementing the tracking require good light conditions, i.e., in terms of visibility, and sufficient contrast to distinguish the subject, the clothes worn by the subject, and the environment.
These and other problems of conventional technologies in the field of motion tracking and analysis of a subject are solved by the technologies disclosed herein and summarized in the following.
A simplified summary of some embodiments of the disclosure is provided in the following to give a basic understanding of these embodiments and their advantages. Further embodiments and technical details are given in the detailed description presented below.
According to an embodiment, a computer-implemented method for movement tracking of a subject comprises the steps of: obtaining depth image data of a subject's body during a predefined time period; tracking locations of a plurality of anatomical landmarks of the body based on the depth image data; tracking a location of a predefined point in an environment in which the subject is moving; adjusting the location of one or more of the plurality of anatomical landmarks relative to the location of the predefined point to obtain adjusted locations; and determining a movement of the subject's body based on the adjusted locations during the predefined time period. This, and in particular the step of adjusting the position of the tracked locations, has the advantage that a more precise tracking of the movement of anatomical landmarks relative to each other is possible.
In some embodiments, the environment is a real environment, wherein the predefined point is an anchor point of a virtual environment superimposed on the real environment, and wherein the adjusted locations correspond to a position of the subject's body in the virtual environment.
In some embodiments, the predefined point corresponds to one of the anatomical landmarks or a tag attached to the subject's body and suitable to be recognized in the depth image data, so that locations in the virtual environment follow a motion of the location of the one of the anatomical landmarks. This has the advantage that possibly unintended or unavoidable physical movement or jitter of the subject does not lead to or impact a movement of the subject tracked in the virtual space, since the virtual space, e.g. an augmented reality space for therapeutic purposes, is locked to the subject, e.g. a subject who does the therapeutic exercise. A further advantage results as the movement determination of the subject in the virtual space is robust against movement of the depth image camera, i.e., the results of the movement determination of the subject are not affected by movement of the depth camera during recording of the depth image data on which the determination of the movement of the subject is based. This aspect is especially useful if the depth sensor is integrated in a mobile device that may itself move. In other words, movement of the mobile device may be distinguished from movement of the subject from which the depth image data is recorded.
In some embodiments, the predefined point is a fixed location in the real environment, the fixed location being preferably a geolocation or a position of a tag fixed to the real environment and suitable to be recognized in the depth image data. An advantage of this further aspect is also that the results of the determination of movement of the subject are not affected by movement of the depth camera during recording of the depth image data.
In some embodiments, the virtual environment is an augmented reality environment.
In some embodiments, such augmented reality environment includes a virtual object that the subject can interact with, wherein, in the augmented reality environment and based on the adjusted locations and a location of the virtual object, it is determined whether the subject interacts with the virtual object. If an interaction is determined, an indication of the interaction is signaled to the subject.
In some embodiments, the augmented reality environment includes a virtual guiding object. In the augmented reality environment and based on the adjusted locations and a location of the virtual guiding object, a virtual distance between a subject's body part corresponding to one or more of the adjusted locations and the virtual guiding object is determined and, in response to the determination, an indication of the virtual distance is signaled to the subject.
In some embodiments, the augmented reality environment includes a virtual target movement path. During the predefined time period and based on the adjusted locations and locations of the target movement path in the augmented reality environment, a deviation between the subject's body part corresponding to one or more of the adjusted locations and the target movement path is determined and, in response to the determination, an indication of the deviation is signaled to the subject, for example in a visual or audible manner.
In some embodiments, a combined image of the real environment and the superimposed virtual environment is rendered in real-time and said combined image is output for display.
The technology disclosed herein can also be embodied in a computing device comprising a motion sensor, a camera, and a processor communicatively coupled to a display, wherein the processor is adapted to perform the steps of a method for movement tracking of a subject as outlined hereinbefore.
In some embodiments, the depth image data is obtained using a LiDAR sensor, and the LiDAR sensor is optionally included in a mobile computing device. This aspect has the advantage that tracking locations of a plurality of anatomical landmarks works well also under poor lighting and contrast conditions.
In some embodiments, the computing device is a mobile computing device.
The technology disclosed herein can also be embodied in a computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for movement tracking of a subject according to the methods specified hereinabove.
The present technology provides one or more of the following advantages:
An advantage is that no ML model training is required upfront. Instead of tediously training a new ML model for each therapeutic exercise, an artificial environment is created, as described below, as a therapeutic virtual environment (TVE) or virtual object.
Virtual objects are shown as part of the TVE on a screen. The virtual objects are useful for establishing the tracked person's orientation, e.g., to enable the person to improve the precision of an exercise, or to serve as a motivation for the person. Unlike with conventional technology, the size and distance of such objects in the virtual space shown on the screen can be more easily adjusted, so that the display of the object as perceived by the person is more realistic. More generally, approaches that do not link an exercise/training object/device to the position of the person performing the exercise suffer from imprecision when the person begins to move away from the initial position taken when starting the exercise. Such imprecision can deteriorate the user's perception of the virtual reality environment as a real environment, and can further deteriorate the accuracy with which the movement of the person, and the distances of the person's body or parts thereof to the objects in the virtual reality space shown on the screen, are determined.
Increasingly strenuous exercises, i.e. exercises that are more challenging for the person to carry out, can be easily programmed into the system; e.g. a (virtual) wall which has to be touched by a patient can gradually be moved further away over time depending on the person's or patient's progress or the goals of the exercises.
The technology disclosed herein allows indirect, i.e., object-driven, visual feedback, because a person or a patient can see on a screen not only what s/he is doing but also see herself in relation to a defined endpoint, e.g., a virtual wall. This avoids providing feedback to the person by other means, e.g., signals, an instructor's voice, or displayed text, which is prone to become monotonous or distracting.
The augmented reality environment (ARE) can be a therapeutic virtual environment (TVE), which minimizes effects of an inadvertent position change or cheating by a person or patient in the exercises performed, because object distances are linked and dependent on each other. For example, moving closer to the virtual wall will not make it easier for the person to reach the wall, since the wall moves with the person: the virtual environment which includes the wall is linked to the person's body or a part thereof.
TVEs are defined (sketched) once and are adjusted to (and by) the patient on the fly while executing exercises based on instructions, thereby defining paths and endpoints, or are set up by exploiting parameters derived from the patient (physical parameters like height, arm length, etc., and/or capability parameters associated with a disease or a stage thereof).
Using a device capable of generating 3D data, e.g., LiDAR, the TVE space can be used in all dimensions rather than being significantly limited to the coronal plane with regard to the lens, as is the case when using a 2D camera system. Moreover, the use of a LiDAR sensor and the three-dimensional tracking it enables allow for tracking therapeutic exercises in the z-direction (sagittal). Conventional technology, however, tracks predominantly only movement in the x-y direction (coronal). The technology disclosed herein therefore provides and supports a broader range of therapeutic exercises.
The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
The foregoing summary as well as the following detailed description of preferred embodiments are better understood when read in conjunction with the appended drawings. For illustrating the invention, the drawings show exemplary details of systems, methods, and experimental data. The information shown in the drawings is exemplary and explanatory only and is not restrictive of the invention as claimed. In the drawings:
The present disclosure relates to methods and systems for a computer-implemented tracking and analysis of movement of a subject.
The subject can be a person, but the technology is not limited to human subjects. More generally, a subject may also be an animal. The subject may also be a moving object, such as a robot. For the sake of simplicity and illustration, the subject is referred to as “person” in the following but is not limited thereto unless stated otherwise.
The methods and systems described herein are suitable for supporting a broad range of therapeutic exercises and for assessing a person's performance and movement. Specifically, support is given by providing a virtual environment, which augments the real environment surrounding the person and which is thus an augmented reality environment (ARE).
The method 100 starts with obtaining 110 depth image data of a subject's body during a predefined time period. The depth image data is obtained, for example, by conventional depth-sensing cameras available in mobile devices. Alternatively, or in addition, a LiDAR sensor-based depth sensing mechanism may be employed. Depth image data includes, for each pixel of an image, besides color information also depth information in terms of distance from the depth sensor or camera. Depth image data is obtained in step 110 through the motion sensor from the person's body during a predefined time period. A predefined time period can be in the order of (sub-)seconds or minutes, depending on the duration of an exercise or movement. For example, such a predefined time period can correspond to the physical response time of an organism, which can be within a few hundred milliseconds. The motion sensor may capture depth image data during the predefined time period, for example at a rate of 30 Hz, and provide the same to the processor for analysis. For LiDAR-based sensors, this rate can be higher and in the order of a few hundred Hz. Obtaining image data, providing the data to the processor, and further analysis of the data as described below can occur in a pipelined or streamed manner, such that results of the movement tracking and analysis can be provided in real-time or quasi real-time with slight delay.
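For illustration only, the following Python sketch shows how depth frames could be captured during a predefined time period at roughly 30 Hz and streamed to a processing step; the DepthSensor class and its read_frame method are hypothetical placeholders, not part of any actual sensor SDK.

```python
import time
from typing import Callable, Dict, Tuple

Vec3 = Tuple[float, float, float]

class DepthSensor:
    """Hypothetical sensor wrapper; a real system would use the SDK shipped
    with the depth camera (e.g. a Kinect- or LiDAR-based framework)."""
    def read_frame(self) -> Dict[str, Vec3]:
        # Placeholder: would return per-landmark 3D positions (in meters)
        # derived from the current depth image.
        return {"right_hand": (0.4, 1.1, 2.3), "right_foot": (0.2, 0.0, 2.4)}

def stream_frames(sensor: DepthSensor,
                  handle: Callable[[float, Dict[str, Vec3]], None],
                  duration_s: float = 10.0,
                  rate_hz: float = 30.0) -> None:
    """Pull frames for the predefined time period and push them to a handler."""
    period = 1.0 / rate_hz
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        handle(t, sensor.read_frame())  # quasi real-time, streamed processing
        time.sleep(period)

if __name__ == "__main__":
    frames = []
    stream_frames(DepthSensor(), lambda t, lm: frames.append((t, lm)), duration_s=1.0)
    print(f"captured {len(frames)} frames")
```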
The method continues by tracking, in step 120, locations of a plurality of anatomical landmarks of the body based on the depth image data. Anatomical landmarks of a subject are described in more detail below and illustrated in the accompanying drawings.
The analysis of depth data captured during the predefined time period provides motion information for each individual anatomic landmark, and more specifically each skeleton joint, during the predefined time period. For example, using the SDK, information is provided, for each anatomic landmark, about the spatial position of the landmark at each time of a frame capture during the predefined time period.
The method continues in step 130 by tracking a location of a predefined point in an environment in which the person is moving. Naturally, all movements of the person occur in physical reality, which provides a physical environment like the room and surroundings in which the person is exercising and moving. This kind of environment is referred to as a “real” or “physical” environment herein. For example, the locations of anatomical landmarks of the person are initially determined in the real environment. The term environment as used herein is, however, not limited to this physical, i.e., real, environment and may also relate to a virtual environment that is superimposed on the real environment, such virtual environment being an augmented reality environment and, more specifically, called a therapeutic virtual environment (TVE) when used for therapeutic purposes. The predefined point that is tracked in step 130 can, e.g., correspond to an anatomical landmark of the person, which may move over time if the person moves, a static location in the real environment such as a geolocation, which does not move over time, or a current position of some moving object in the real environment.
The predefined point is also referred to as the “anchor point” of the virtual environment; the virtual environment can thus be “anchored” in different ways to the real environment. This is discussed in more detail in the following and illustrated in the accompanying drawings.
The method continues with step 140 by adjusting the location of one or more of the plurality of anatomical landmarks 310 relative to the location of the predefined point to obtain adjusted locations. The location of each anatomical landmark is initially determined in the real environment. In some implementations, the adjustment of a coordinate of an anatomical landmark means that a difference is computed between the spatial coordinate of the landmark and the spatial coordinate of the anchor point to obtain an adjusted coordinate. In other words, the adjusted coordinate can be understood as a coordinate relative to the anchor point, i.e. a coordinate in a coordinate system with the same orientation in x, y, z direction as the original coordinate system but where the origin is moved to the anchor point. Such adjustment is computed for the position of each landmark and the anchor point over the predefined time period. The adjustment is intended to determine, from the movement of the person in the physical space, the movement of the person in the virtual space. As will be discussed in detail in the following, this is advantageous since it enables tracking movement of the person in the anchored virtual environment and facilitates assessing movement of the subject's anatomical landmarks relative to each other. For example, if anatomical landmarks intended to be tracked move in the physical environment by the same distance and direction as an anchor point of a virtual environment, then zero movement (no movement) is detected based on the adjusted coordinates of said anatomical landmarks intended to be tracked and relative to said virtual environment anchored at said anchor point.
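A minimal Python sketch of the coordinate adjustment described above, assuming landmark and anchor positions are given as (x, y, z) tuples in the coordinate system of the real environment; names and values are illustrative only.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def adjust_landmarks(landmarks: Dict[str, Vec3], anchor: Vec3) -> Dict[str, Vec3]:
    """Express each landmark relative to the anchor point.

    The adjusted coordinate lives in a coordinate system with the same
    x, y, z orientation as the original one, but with its origin moved
    to the anchor point of the virtual environment."""
    ax, ay, az = anchor
    return {name: (x - ax, y - ay, z - az) for name, (x, y, z) in landmarks.items()}

# If tracked landmarks and the anchor move by the same offset in the
# physical environment, the adjusted coordinates stay constant, i.e.
# zero movement is detected relative to the anchored virtual environment.
frame_a = adjust_landmarks({"right_hand": (1.0, 1.2, 2.0)}, anchor=(0.5, 0.0, 2.0))
frame_b = adjust_landmarks({"right_hand": (1.25, 1.2, 2.0)}, anchor=(0.75, 0.0, 2.0))
assert frame_a == frame_b
```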
The method continues in step 150 by determining a movement of the subject's body based on the adjusted locations during the predefined time period. In that manner and based on the determined movement, feedback can be provided to the person about the movement, such as, e.g., visual guidance of the movement, feedback about a deviation of the movement from a desired movement, and the degree of success of some exercise involving the movement. Movement is specified, for example, as a measure of a distance, a curve of movement of an anatomical landmark over time and the length of this curve, an angle, or a ratio. For example, when a person lifts an arm to the side (coronal move), this could be the increasing distance of the wrist from the hip, or the angle between arm and body, or a corresponding measure when the arm is moved sagittally, i.e. in the y-z plane of the coordinate system 460.
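The sketch below illustrates, under the same assumptions, two of the movement measures mentioned above: the length of the curve traced by a landmark over the time period and an angle between body segments; the coordinates are made-up example values.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def path_length(positions: List[Vec3]) -> float:
    """Length of the curve traced by one landmark over the time period."""
    return sum(math.dist(p, q) for p, q in zip(positions, positions[1:]))

def angle_deg(a: Vec3, vertex: Vec3, b: Vec3) -> float:
    """Angle at `vertex` between the segments vertex->a and vertex->b,
    e.g. the angle between the arm (wrist) and the body (hip) at the shoulder."""
    u = [a[i] - vertex[i] for i in range(3)]
    v = [b[i] - vertex[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Example: a wrist moving sideways (coronal move) while shoulder and hip stay fixed.
wrist_track = [(0.0, 1.0, 2.0), (0.2, 1.1, 2.0), (0.5, 1.3, 2.0)]
print(f"curve length: {path_length(wrist_track):.2f} m")
print(f"arm-body angle: {angle_deg(wrist_track[-1], (0.0, 1.4, 2.0), (0.0, 0.9, 2.0)):.1f} deg")
```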
The motion sensor 220 can be a three-dimensional motion sensor and may include, for example, an infrared depth camera or LiDAR sensor. Motion sensors may be combined with a camera 240. Examples of commercial sensors are Microsoft's Kinect™ sensor and Intel's RealSense™ sensor. Other examples include LiDAR-based depth sensors, e.g., available in some of Apple's iPhone™ models. These and other sensors are marker-free 3D motion sensors, which means that the person for which movement is to be determined is not required to wear body markers, which simplifies the test setup.
The motion sensor 220 is connected to computing device 210 that includes a processor 230 configured to run software, which is typically based on a software development kit (SDK) provided with the motion sensor 220, to obtain and process data from the motion sensor 220. In that manner, the processor 230 is capable of obtaining, from data corresponding to individual depth images, information about the location of anatomic landmarks of the person's body 250. The real-time locations of anatomical landmarks can, for instance, be obtained with the Microsoft Software Development Kit or other software packages based on commercially available depth cameras such as Microsoft Kinect™.
The motion sensor 220 and the computing device 210 may be integrated in a single device or they may be separate devices that are communicatively coupled.
The system may further include a display 270 communicatively coupled to the processor. The display may be included in the computing device as, e.g., is the case with a laptop or smartphone, or may be a separate device. The connection between the processor and the display may be via wire, as shown in the drawings.
It is not required that the person visually perceives the virtual environment, i.e., the augmented reality environment, in an immersive, first-person manner. While a VR headset is usable by the person as a display 270, even a TV screen can be used for displaying the augmented reality environment to the person. VR headsets are not preferable for older patients.
The computing device 200 may further include a data store, which is not shown in the Figure, to record motion data obtained over a longer period of time during a plurality of exercises and movements of the same or different persons. Analysis of historical movement data, and of information derived through further analysis thereof, can, for example, be used to assess progression of a movement disorder, or the effect of physiotherapy or other forms of therapy, over time.
In some examples, anatomical landmarks can be skeleton joints, also called joints or joint points of the person's body, or body points. The kind of landmarks available is typically defined by the SDK provided with the motion sensor 220. For example, the motion sensor 220 in combination with an SDK can provide three-dimensional locations of skeleton joints in real-time. Skeleton joints are discussed in the context of the accompanying drawings.
The person's body 450 may be located at a predetermined distance dist from the motion sensor. This distance 480 between the person 450 and the motion sensor 410 can depend on the type and characteristics of the motion sensor 410 and is typically between 0.8 and 4 meters. In a typical scenario, the whole body is in the field of view of the motion sensor 410, with the person commonly about 2.5 to 4 meters away from the motion sensor 410.
The motion sensor 410 is preferably positioned at a predetermined height h. This height 470 above the ground is, for example, around 0.8 meter, which is the typical height when the motion sensor is positioned on a table, or 0.2 to 0.3 meter if the motion sensor or the device including the motion sensor is positioned on the floor.
The present technology creates and provides an artificial environment, i.e., the augmented reality environment (ARE), e.g., as a therapeutic virtual environment (TVE). This environment is used to define movements, e.g., stretching a hand outside of a sphere, as shown in the scenarios illustrated in the drawings.
One aspect of the disclosed approach is that the TVE is anchored to a spot linked with the patient, such spot being also called anchor point. The patient is typically moving, with movement related to a therapeutic exercise and possibly movement unrelated thereto. The disclosed technology enables distinguishing both kinds of movements and focusing the assessment of movement on the movement related to the therapeutic exercise. This approach enables creating a stable and precise patient-TVE system by means of defined anchor points. These anchor points are one of the following: (a) one of the anatomical landmarks of the patient's body, (b) a tag attached to the patient's body, or (c) a fixed location in the real environment, such as a geolocation or a tag fixed to the real environment.
In scenarios (a) and (b), the anchor point being part of or directly linked to the patient creates a very stable system between the TVE and the patient.
In scenario (c) the anchor point is not directly linked with the patient and serves as an external reference point to coordinate the relationship between the patient and the TVE.
Accordingly, the augmented reality environment is anchored. Anchoring of the virtual environment is “relative” in that it is not tied to the subject. It is an anchoring to a stable reference point. This means that positions in the augmented reality environment are determined relative to a stable position of the anchor. In other words, the augmented reality environment can move in the physical space corresponding to the movement of its anchor. The choice of anchor point may be premised on the ease of its determination or its calculation. Different scenarios and anchors of an augmented reality environment, or more specifically a TVE, are illustrated in the accompanying drawings.
The device 580 tracks locations of one or more of the plurality of anatomical landmarks of the person's body in the real environment as described before. The TVE 520 is created and maintained as a data structure by software in the device 580. The device maintains the positions of virtual objects belonging to the TVE 520, which are overlaid on the real environment. In other words, for each object in the TVE 520, the device can compute a position in the real environment and vice versa, i.e., for each anatomical landmark of the person's body in the real environment, a position in the TVE 520 can be computed.
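As a simple illustration of this two-way correspondence, the sketch below models the TVE as a translated copy of the real-environment coordinate system in which the anchor point coincides with a chosen TVE coordinate; a full implementation could additionally handle rotation and scale, which are omitted here as an assumption.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def real_to_tve(p_real: Vec3, anchor_real: Vec3,
                anchor_tve: Vec3 = (0.0, 0.0, 0.0)) -> Vec3:
    """Map a point from the real environment into the TVE, assuming the TVE
    is anchored such that `anchor_real` corresponds to `anchor_tve`."""
    return tuple(p - a + t for p, a, t in zip(p_real, anchor_real, anchor_tve))

def tve_to_real(p_tve: Vec3, anchor_real: Vec3,
                anchor_tve: Vec3 = (0.0, 0.0, 0.0)) -> Vec3:
    """Inverse mapping, e.g. to place a virtual object into the camera image."""
    return tuple(p - t + a for p, t, a in zip(p_tve, anchor_tve, anchor_real))

hand_tve = real_to_tve((1.0, 1.2, 2.0), anchor_real=(0.5, 0.0, 2.0))
assert tve_to_real(hand_tve, anchor_real=(0.5, 0.0, 2.0)) == (1.0, 1.2, 2.0)
```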
The device 580 may accordingly compute a visualization of this scenario by overlaying the real environment and the TVE 520 and visualizing the overlay on a display 570 to provide visual feedback to the person 510. In other words, a combined image of the real environment and the superimposed virtual environment is rendered in real-time. The terms “overlaying” and “superimposing” are used interchangeably herein. The rendered combined image can be output for display on a display device 570.
In the following, the terms “right” and “left” should be interpreted as orientation from the perspective of the viewer of the drawings.
The position of the TVE 520 and, in particular, of the virtual environment surrounding the person, is illustrated in the drawings.
In this scenario, the TVE 520 is anchored, via optional vector 540, to the position of the right foot of the person 510 in the real environment.
The effect of this anchoring of the TVE 520 relative to the position of the right foot of the person 510 can be described with an example as follows: consider a therapeutic exercise where the person 510 is to move the right arm and reach out far to the right. To support the exercise, the person may be shown the globe of the TVE 520 on a display, so that the person may reach beyond the border of the globe. Since the position of the globe is anchored to the right foot, it will not be possible for the person to step to the right to facilitate reaching out farther beyond the boundary of the globe and thus cheat on the exercise. The position of the right hand of the person is therefore tracked relative to the position of the right foot. This allows more accurately assessing the movement of the person and makes the assessment of the movement of the person, and of the degree to which the person meets the goal of reaching beyond the boundary of the virtual globe, independent of the absolute position of the person in the real environment.
In some embodiments, the position of a TVE is initially, e.g. at the start of a therapeutic session or exercise, determined to be in alignment with a certain anatomical landmark of the person performing the exercise, for example the center of body (COB) landmark 615. In other words, the virtual environment 620 is positioned such that the COB is located at a specific coordinate in the virtual environment 620, e.g. at the center of the spherical shape 620. In that manner, a constant displacement vector 640 can be determined as a result, for example, relative to a fixed anchor point, i.e. the tag 630 on the chair 650 in the physical environment. This initial determination can be understood as a calibration of the position of the virtual environment. If, during the exercise, it is determined that the position of the COB landmark 615 moves farther away than a threshold distance from the initial position inside the virtual environment 620, then this can be indicated to the person 610 through a visual or acoustic feedback signal. In such a situation, the person 610 has, possibly inadvertently, moved away from the initial physical location, which may result in a degradation of the movement determination through the depth camera, e.g. if the person is out of focus. At that point, the position of the virtual environment may be determined anew, or, alternatively, the person moves back to the initial position, such that the COB landmark of the person is within the threshold distance from the initial position inside the virtual environment. In that manner, the virtual and physical environments can be kept in alignment, thus keeping the TVE system stable.
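A possible sketch of this alignment check and re-calibration, assuming the center-of-body (COB) landmark and the fixed anchor are available as 3D coordinates; the 0.3 m threshold is an assumed, configurable parameter and not a value taken from this disclosure.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def is_aligned(cob_now: Vec3, cob_initial: Vec3, threshold_m: float = 0.3) -> bool:
    """True while the COB landmark stays within the threshold distance of
    its calibrated position inside the virtual environment."""
    return math.dist(cob_now, cob_initial) <= threshold_m

def recalibrate(cob_now: Vec3, anchor_real: Vec3) -> Vec3:
    """Determine a new constant displacement vector from the fixed anchor
    (e.g. a tag fixed in the room) to the person's current COB position."""
    return tuple(c - a for c, a in zip(cob_now, anchor_real))

cob_initial, cob_now = (0.4, 1.0, 2.5), (0.9, 1.0, 2.5)
if not is_aligned(cob_now, cob_initial):
    print("feedback signal: please return to the starting position")
    print("new displacement vector:", recalibrate(cob_now, anchor_real=(0.0, 0.5, 3.0)))
```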
In one case, the ball 940 may be a virtual object, anchored to an anatomical landmark corresponding to the right hand of the person. The pillar 920 is a reference object in the TVE anchored to the right foot. Making a step toward the pillar will therefore move the pillar and basket in a manner that maintains the distance between the right foot and the virtual pillar and basket and will therefore not help the person to reach the basket. Moving only the right arm farther toward the basket will not move the pillar and basket and will allow the person to succeed in depositing the virtual ball into the virtual basket 945. The determination of the position of the virtual ball 940 relative to the virtual basket 945 is done by the device 980 and the augmented reality software running thereon.
More generally, the augmented reality environment, i.e., the TVE 920, includes one or more virtual objects 920 and 945 that a subject 910 can interact with. The device 980 determines, in the augmented reality environment and based on the adjusted locations and a location of the one or more virtual objects 920 and 945, whether the subject 910 interacts with the virtual objects, and in response to determining an interaction, an indication of the interaction is signaled to the subject, e.g., via a display 970 or an audible indication 990.
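A minimal sketch of such an interaction test, assuming each virtual object is approximated by a center point and an interaction radius in the anchored virtual environment; object names, positions, and radii are illustrative assumptions.

```python
import math
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def detect_interactions(adjusted_landmarks: Dict[str, Vec3],
                        virtual_objects: Dict[str, Tuple[Vec3, float]]
                        ) -> List[Tuple[str, str]]:
    """Report which body parts currently touch which virtual objects.

    `virtual_objects` maps an object name to (center in the virtual
    environment, interaction radius in meters)."""
    hits = []
    for part, pos in adjusted_landmarks.items():
        for obj, (center, radius) in virtual_objects.items():
            if math.dist(pos, center) <= radius:
                hits.append((part, obj))
    return hits

landmarks = {"right_hand": (0.45, 1.50, 0.10)}
objects = {"virtual_basket": ((0.50, 1.55, 0.10), 0.15)}
for part, obj in detect_interactions(landmarks, objects):
    print(f"signal to the subject: {part} interacts with {obj}")
```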
In another case, the ball may also be a real object in the real environment that is designed to be tracked by the device 980. For example, such a physical ball can include a tag, not shown in the drawings, suitable to be recognized in the depth image data.
Device 1080 tracks positions of a plurality of anatomical landmarks of the person's body 1010, creates and maintains the TVE, and, e.g., wirelessly 1090, communicates an image of the person and an overlay image including the virtual wall 1020 and guiding object 1040 to a display 1070. The person may use this display 1070 as guidance to interact with the virtual guiding object 1040.
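For illustration, a small sketch of how the virtual distance between a tracked body part and the guiding object could be turned into an indication for the person; the 0.05 m "reached" threshold is an assumed parameter.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def guiding_feedback(body_part: Vec3, guiding_object: Vec3,
                     reached_m: float = 0.05) -> str:
    """Turn the virtual distance between a tracked body part and the
    guiding object into a simple textual indication."""
    d = math.dist(body_part, guiding_object)
    return "target reached" if d <= reached_m else f"{d:.2f} m to go"

# Right hand approaching a guiding object placed on a virtual wall.
print(guiding_feedback((0.30, 1.40, 0.60), (0.30, 1.40, 0.20)))  # "0.40 m to go"
```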
Aspects of this disclosure can be implemented in digital circuits, computer-readable storage media, as one or more computer programs, or a combination of one or more of the foregoing. The computer-readable storage media can be non-transitory, e.g., as one or more instructions executable by a cloud computing platform and stored on a tangible storage device.
In this specification, the phrase “configured to” is used in different contexts related to computer systems, hardware, or parts of a computer program. When a system is said to be configured to perform one or more operations, this means that the system has appropriate software, firmware, and/or hardware installed on the system that, when in operation, causes the system to perform the one or more operations. When some hardware is said to be configured to perform one or more operations, this means that the hardware includes one or more circuits that, when in operation, receive input and generate output according to the input and corresponding to the one or more operations. When a computer program is said to be configured to perform one or more operations, this means that the computer program includes one or more program instructions that, when executed by one or more computers, cause the one or more computers to perform the one or more operations.
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. In the foregoing description, the provision of the examples described, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting embodiments to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments.
Further embodiments are described in the following:
Embodiment 1: A computer-implemented method 100 for movement tracking of a subject, the method comprising the steps of: obtaining 110 depth image data of a subject's body 250 during a predefined time period; tracking 120 locations of a plurality of anatomical landmarks 310 of the body based on the depth image data; tracking 130 a location of a predefined point in an environment, in which the subject is moving; adjusting 140 the location of one or more of the plurality of anatomical landmarks 310 relative to the location of the predefined point to obtain adjusted locations; determining 150 a movement of the subject's body based on the adjusted locations during the predefined time period.
Embodiment 2: The method 100 of embodiment 1, wherein the environment is a real environment, wherein the predefined point is an anchor point of a virtual environment superimposed on the real environment, and wherein the adjusted locations correspond to a position of the subject's body in the virtual environment.
Embodiment 3: The method 100 of embodiment 2, wherein the predefined point 530 corresponds to one of the anatomical landmarks or a tag 530 attached to the subject's body and suitable to be recognized in the depth image data, so that locations in the virtual environment 520 follow a motion of the location of the one of the anatomical landmarks.
Embodiment 4: The method 100 of embodiment 2, wherein the predefined point is a fixed location in the real environment, the fixed location being preferably one of a geolocation or a position of a tag 630 fixed to the real environment and suitable to be recognized in the depth image data.
Embodiment 5: The method 100 of one of embodiments 2 to 4, wherein the virtual environment is an augmented reality environment.
Embodiment 6: The method 100 of embodiment 5, wherein the augmented reality environment includes a virtual object 920 that the subject 910 can interact with; and the method further comprises the steps of: determining, in the augmented reality environment and based on the adjusted locations and a location of the virtual object 920, whether the subject 910 interacts with the virtual object 920; and in response to determining an interaction, signaling an indication 990 of the interaction to the subject 910.
Embodiment 7: The method 100 of embodiment 5 or 6, wherein the augmented reality environment includes a virtual guiding object 1020 and the method further comprises the steps of: determining, in the augmented reality environment and based on the adjusted locations and a location of the virtual guiding object 1020, a virtual distance between a subject's body part corresponding to one or more of the adjusted locations and the virtual guiding object 1020; in response to the determination, signaling an indication 1090 of the virtual distance 1045 to the subject 1010.
Embodiment 8: The method 100 of one of embodiments 5 to 7, wherein the augmented reality environment 1120 includes a virtual target movement path 1140 and the method further comprises the steps of: determining, during the predefined time period and based on the adjusted locations and locations of the target movement path 1140, a deviation between the subject's body part corresponding to one or more of the adjusted locations 1160 and the target movement path 1130; in response to the determination, signaling an indication 1190 of the deviation to the subject 1110.
Embodiment 9: The method 100 of one of embodiments 2 to 8, further comprising the steps of: rendering in real-time a combined image of the real environment and the superimposed virtual environment; outputting the combined image for display.
Embodiment 10: The method 100 of one of the preceding embodiments, wherein depth image data is obtained using a LIDAR sensor, and wherein the LiDAR sensor is optionally included in a mobile computing device.
Embodiment 11: A computing device 210, comprising: a motion sensor 220; a camera 240; and a processor 230 communicatively coupled to a display 270 and adapted to perform the steps according to the method of one of embodiments 1 to 9.
Embodiment 12: The computing device 210 of embodiment 11, wherein the motion sensor 220 is a LiDAR sensor.
Embodiment 13: The computing device 210 of embodiment 11 or 12, wherein the computing device 210 is a mobile computing device.
Embodiment 14: A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for movement tracking of a subject according to the method of one of embodiments 1 to 10.
Embodiment 15: The method 100 of embodiment 5 or 6, wherein the augmented reality environment includes a virtual reference object having the shape of three two-dimensional planes. The reference object includes a horizontal plane 860, and two vertical planes 850 and 840 orthogonal to each other and orthogonal to the horizontal plane 860. Vertical plane 840 is oriented in sagittal direction towards the person 810.
Embodiment 16: The method 100 of embodiment 15, wherein the reference object, i.e., the ensemble of the three planes, is anchored at a spot halfway between the landmarks corresponding to the tips of the person's left 812 and right 815 feet.
Embodiment 17: The method 100 of embodiment 5 or 6, wherein an initial position of the augmented reality environment, e.g. at the start of a therapeutic session or exercise, is determined to be in alignment with a certain anatomical landmark of a subject performing the exercise and wherein the method further includes the steps of: determining a constant displacement vector 640 relative to a fixed anchor point, i.e. the tag 630 on the chair 650 in the physical environment; determining that the position of the certain anatomical landmark 615 moves away farther than a threshold distance from the initial position inside the virtual environment 620; optionally issuing to the person 610 a visual or acoustic feedback signal; determining a new position of the augmented reality environment, wherein the position for the augmented reality environment is determined by defining an anchor point and optionally a displacement vector.
Foreign priority: EP 21204019.0, Oct 2021 (regional).
International filing: PCT/EP2022/079195, filed 10/20/2022 (WO).