The present invention relates to a method and to a device for automatically estimating a body weight of a person, and to a vehicle, in particular a land vehicle, equipped with such a device.
When it comes to determining the body weight of a person, scales of a wide variety of designs have long been used, these determining the body weight on the basis of the weight force exerted by the body of the person on the scales.
In addition to these conventional methods for determining body weight, newer methods are now also known in which the body weight is estimated on the basis of an image-sensor-based recording of the person. In one particularly simple embodiment, the height of the person is for this purpose estimated, for example, from the image obtained using image sensors, and an estimated value for the body weight of the person is determined by way of a comparison table that correlates the height with a body weight typical for that height.
A method for recognizing poses and for the automatic, software-aided classification of different body areas of a person based on a 3D image of the person is described in the article by Jamie Shotton et al., “Real-Time Human Pose Recognition in Parts from Single Depth Images”; Microsoft Research Cambridge & Xbox Incubation, February 2016, available on the Internet at https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/BodyPartRecognition.pdf
The present invention is based on the object of further improving the achievable reliability and/or accuracy of a body weight determination for a person on the basis of at least one image of the person captured using image sensors.
The object is achieved according to the teaching of the independent claims. Various embodiments and developments of the invention are the subject matter of the dependent claims.
A first aspect of the invention relates to a method, in particular a computer-implemented method, for automatically estimating a body weight of a person. The method comprises the following method steps: (i) generating or receiving image data that represent an image, captured using image sensors, of at least a partial area of the body of a person by way of pixels; (ii) classifying at least a subset of the pixels based on a classification in which different classes each correspond to a different body area, in particular body part, wherein the pixels to be classified are each assigned to a specific body area of the person and respective confidence values are determined for these class assignments; (iii) for each of at least two of the classes occupied with assigned pixels, calculating a position of at least one reference point determined according to a specification, which reference point may in particular be a specific pixel, for the body area corresponding to this class on the basis of the pixels assigned to this class; (iv) determining a respective distance between at least two of the selected reference points; (v) determining at least one estimated value for the body weight of the person based on a predetermined relationship, in particular a mathematical function, which defines a relationship between different possible distance values and body weight values respectively assigned thereto; and (vi) outputting the at least one estimated value for the body weight of the person and optionally the one or more determined distances or positions of the reference points. In the method, an exclusive selection of those pixels that are used to determine the reference points is additionally made on the basis of the respective confidence values of their class assignments using a first confidence criterion.
“Exclusive selection” here means that the pixels that are not selected based on the confidence criterion due to their respective confidence value are not used to determine the reference points. The same applies accordingly to the “exclusive” selections discussed further below with regard to the variables available for selection or to be determined there.
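Purely by way of illustration, the following sketch outlines method steps (i) to (vi) together with the exclusive pixel selection in a simple Python form; the classifier, the weight model and the helper for the predetermined reference-point pairs are hypothetical placeholders and not part of the claimed method.

```python
# Illustrative sketch of steps (i)-(vi); "classify", "weight_from_distances"
# and "predetermined_pairs" are hypothetical placeholders.
from dataclasses import dataclass

import numpy as np


@dataclass
class ClassifiedPixel:
    position: np.ndarray   # position of the pixel (e.g. 3D point from a depth image)
    body_class: int        # body area (class) the pixel was assigned to
    confidence: float      # confidence value of this class assignment


def estimate_body_weight(pixels, classify, weight_from_distances,
                         confidence_threshold=0.7):
    # (ii) classify pixels into body areas, with a confidence value per pixel
    classified = [ClassifiedPixel(np.asarray(p, float), *classify(p)) for p in pixels]

    # exclusive selection: only pixels satisfying the first confidence
    # criterion (here a simple threshold) are used for the reference points
    selected = [c for c in classified if c.confidence >= confidence_threshold]

    # (iii) one reference point per occupied class, here its centroid
    reference_points = {}
    for cls in {c.body_class for c in selected}:
        pts = np.array([c.position for c in selected if c.body_class == cls])
        reference_points[cls] = pts.mean(axis=0)

    # (iv) distances between predetermined pairs of reference points
    distances = {
        (a, b): float(np.linalg.norm(reference_points[a] - reference_points[b]))
        for a, b in predetermined_pairs(reference_points)
    }

    # (v) map the distances to a body-weight estimate via a predetermined
    # relationship (reference table, database or regression function)
    estimate = weight_from_distances(distances)

    # (vi) output the estimate together with the underlying distances/points
    return estimate, distances, reference_points


def predetermined_pairs(reference_points):
    # placeholder: in practice the relevant pairs (e.g. the two shoulder
    # points, or head and lap point) are fixed by the method specification
    keys = sorted(reference_points)
    return list(zip(keys, keys[1:]))
```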
The relationship for determining the estimated value from the one or more relevant distances may be given in particular in the form of a reference table or a database or a calculation formula, in particular a mathematical function.
The use of confidence values as part of the abovementioned method and the exclusive selection, based thereon, of certain pixels, in particular those with high confidence, may be used to improve the reliability (in particular in the sense of the dependability or robustness of the method) and the accuracy of the at least one estimated value that is ultimately determined and output for the body weight of the person.
If more than one estimated value for the body weight is determined and output, this may take place in particular such that these determined estimated values together define an estimated value range. By way of example, the estimation could thereby deliver the result that the estimated body weight is in the range of 70 kg to 71 kg, wherein the value 70 kg represents a first estimated value (lower limit value) and the value 71 kg represents a second estimated value (upper limit value) in this example. It is also conceivable to ascertain and output yet further estimated values, in particular a mean value (for example 70.5 kg here), as further estimated value.
The output may in particular be in a data or signal format that is suitable for further machine processing or use or on a human-machine interface.
The terms “comprises,” “contains,” “includes,” “has,” “having” or any other variant thereof as may be used herein are intended to cover non-exclusive inclusion. By way of example, a method or a device that comprises or has a list of elements is thus not necessarily limited to those elements, but may include other elements that are not expressly listed or that are inherent in such a method or such a device.
Furthermore, unless expressly stated otherwise, “or” refers to an inclusive or and not to an exclusive “or”. For example, a condition A or B is satisfied by one of the following conditions: A is true (or present) and B is false (or absent), A is false (or absent) and B is true (or present), and both A and B are true (or present).
The terms “a” or “an” as used here are defined in the sense of “one or more”. The terms “another” and “a further” and any other variant thereof should be understood in the sense of “at least one other”.
The term “plurality” as used here should be understood in the sense of “two or more”.
A few preferred embodiments of the method will now be described below, each of which, unless expressly excluded or technically impossible, may be combined as desired with one another and with the further described other aspects of the invention.
In some embodiments, based on the confidence values of the respective pixel assignments of the pixels used to calculate the positions of the reference points, respective confidence values for these positions are ascertained and an exclusive selection of those positions that are used to determine the distances is made based on the respective confidence values of these positions using a second confidence criterion.
Furthermore, in some of these embodiments, based on the confidence values of the respective positions of the reference points used to calculate the distances, respective confidence values for these distances may also be ascertained and an exclusive selection of those distances that are used to determine the at least one estimated value for the body weight may be made based on the respective confidence values of these distances using a third confidence criterion.
The abovementioned embodiments may each be used in particular to increase the reliability and accuracy of the body weight estimation that is able to be performed using the method.
The first, the second and the third confidence criterion may in particular be identical pairwise or as a whole (advantage of simple implementation) or else may be selected differently (advantage of individual adjustability and optimization of the individual steps). By way of example, one or more of the confidence criteria may be defined by way of a respective confidence threshold that defines a respective minimum confidence value required for the use of the associated variable (pixel, reference point position, distance or estimated value for the body weight) as part of the exclusive selection applicable thereto for the performance of the respective following method step.
In some embodiments, a respective confidence value is ascertained on the basis of a mathematical mean value or extreme value, in particular a minimum value, of the respective confidence values used as input variables in this regard. It is thereby possible, from the individual confidence values of the respective method steps, to easily determine chains of confidence values that are meaningful and consistent in terms of their confidence statement and that overall deliver usable confidence statements for the at least one estimated value for the body weight that is ultimately determined.
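As a minimal sketch of such a confidence chain (assuming plain scalar confidence values; the numeric values are purely illustrative):

```python
import numpy as np

def propagate_confidence(input_confidences, mode="min"):
    """Confidence of a derived quantity from the confidences of its inputs:
    'min' gives a conservative chain (a result is only as trustworthy as its
    weakest input), 'mean' gives an averaged confidence."""
    values = np.asarray(input_confidences, dtype=float)
    return float(values.min() if mode == "min" else values.mean())

# confidence of a reference point from the confidences of its pixels, then
# confidence of a distance from the two reference-point confidences
point_confidence = propagate_confidence([0.92, 0.85, 0.97], mode="min")   # 0.85
distance_confidence = propagate_confidence([point_confidence, 0.90])      # 0.85
```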
In some embodiments, the position of at least one further reference point that is used to determine a distance and that is not represented by the image data is estimated by extrapolation or interpolation on the basis of other reference points represented by the image data or derived therefrom. It is thereby possible, even in cases in which the image data do not represent a body area relevant for determining the estimated value for the body weight of the person or at least a reference point thereof relevant for this determination of the estimated value, for instance because the reference point lies outside the captured image area or is concealed in the image itself, to still be able to determine the estimated value for the body weight of the person. Such a case may occur in particular if the person adopts a body position that is disadvantageous for the purposes of the method during the image sensor-based capturing of the image data and in the process at least one of the body areas of the person required to determine the estimated value in accordance with the method comes to lie outside the spatial area covered by the captured image. Specifically, in the application case of determining a body weight for a driver or passenger of a vehicle, this may be the case if the driver or passenger leans forward or to the side in their seat, and their posture thus deviates significantly from a normal upright posture on which the image capture is based.
In some of these embodiments, the extrapolation or interpolation takes place on the basis of at least two of the determined reference points located within the image using a body symmetry related to these reference points and the further reference point to be determined (by extrapolation or interpolation). By way of example, a further (third) reference point, which corresponds to a position on one of the two shoulders of the person, may be determined by way of extrapolation or interpolation using the known symmetry property whereby mutually corresponding points (for example the outer end points thereof) of the two shoulders of a person typically have an approximately equal distance from the centrally running body axis, on the basis of knowledge of the positions of the corresponding first reference point on the other shoulder and a second reference point located on the body axis. It is thereby possible to perform particularly reliable determination of the respective position of further reference points on the basis of the utilization of symmetry.
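A possible sketch of such a symmetry-based estimation, assuming 3D reference-point coordinates, a reference point known to lie on the body axis and an approximately known body-axis direction (all numeric values are hypothetical):

```python
import numpy as np

def mirror_across_body_axis(known_point, axis_point, axis_direction):
    """Estimate a concealed reference point (e.g. on the hidden shoulder) by
    mirroring the corresponding visible point across the body axis, which is
    given by a point on the axis and an (approximate) axis direction."""
    d = np.asarray(axis_direction, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(known_point, dtype=float) - np.asarray(axis_point, dtype=float)
    v_parallel = np.dot(v, d) * d          # component along the body axis
    v_perpendicular = v - v_parallel       # component to be reflected
    return np.asarray(axis_point, dtype=float) + v_parallel - v_perpendicular

# hypothetical values: right shoulder visible, body axis roughly vertical
estimated_left_shoulder = mirror_across_body_axis(
    known_point=[0.22, 1.35, 0.50],
    axis_point=[0.00, 1.10, 0.52],
    axis_direction=[0.0, 1.0, 0.0],
)   # -> approximately [-0.22, 1.35, 0.54]
```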
In some of the abovementioned embodiments, the method furthermore comprises checking the plausibility of the position of the further reference point determined by extrapolation or interpolation based on a plausibility criterion. In this case, the plausibility criterion relates to a respective distance between this further reference point and at least one of the calculated reference points not involved in the extrapolation or interpolation. By way of example, a distance between the further reference point determined by way of extrapolation or interpolation and another reference point contained in the image may for this purpose be calculated and compared with an associated value or value range, which corresponds to plausible values for such a distance, in order to check the plausibility of the position of the further reference point. It may in particular then be decided on the basis of this check result whether the reference point is used for the further method, whether it is redetermined in an alternative way or whether it is discarded in favor of another available reference point. This makes it possible to further increase the reliability and/or accuracy of the method and in particular to achieve sufficient reliability and accuracy in many cases, even when the person has adopted a body position that is disadvantageous for the method during the image capture.
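A simple sketch of such a distance-based plausibility check could look as follows; the comparison point and the range of plausible distance values are assumptions chosen purely for illustration:

```python
import numpy as np

def is_plausible(candidate_point, check_point, plausible_range):
    """Plausibility check for an extrapolated reference point: its distance to
    an independently determined reference point must lie within a range of
    anatomically plausible values (the range itself is an assumption)."""
    distance = float(np.linalg.norm(np.asarray(candidate_point, dtype=float)
                                    - np.asarray(check_point, dtype=float)))
    lower, upper = plausible_range
    return lower <= distance <= upper

# e.g. an extrapolated shoulder point checked against the top of the head;
# all numeric values below are purely illustrative
ok = is_plausible([-0.22, 1.35, 0.54], [0.02, 1.62, 0.51], (0.25, 0.55))
# if not ok: redetermine the point in an alternative way, or discard it
```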
In some embodiments, the method furthermore comprises correcting the calculated positions of the reference points by adjusting the calculated positions on the basis of a distance or a perspective from which the image was captured using image sensors. In this case, the distances are determined on the basis of the thus-corrected positions of the reference points. It is thus possible to at least partially compensate for distance-dependent and/or perspective-dependent influences on the captured image, in particular in the sense of normalization to a predefined standard view with a predefined distance and predefined perspective, meaning that the further determination of the estimated value for the body weight of the person is able to take place with less dependence on, ideally independently of, the distance or perspective during the image capture. This makes it possible to further increase the reliability and/or accuracy of the method.
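By way of a much-simplified illustration, assuming 2D image-plane coordinates, a pinhole-camera model and a known camera distance, such a normalization to a predefined standard distance could be sketched as follows; with 3D depth data, a transformation into a predefined standard viewpoint could be used instead:

```python
import numpy as np

def normalize_to_reference_distance(points_2d, camera_distance, reference_distance):
    """Simplified distance correction for 2D reference points under a
    pinhole-camera assumption: image-plane positions measured at the actual
    camera distance are rescaled to what they would be at a predefined
    reference distance, so that subsequently computed distances between
    reference points are normalized to the standard view."""
    scale = camera_distance / reference_distance
    return np.asarray(points_2d, dtype=float) * scale

# hypothetical example: image captured at 1.1 m, standard view defined at 0.9 m
corrected = normalize_to_reference_distance([[120.0, 80.0], [160.0, 82.0]],
                                            camera_distance=1.1,
                                            reference_distance=0.9)
```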
In some embodiments, the method furthermore comprises preprocessing the image data as part of image processing preceding the classification in order to improve the image quality. This image processing may in particular comprise noise suppression (regarding image noise), removal of the image background or parts thereof or removal of other image components irrelevant to the further method steps. The further method steps may thus take place on the basis of image data optimized as part of the image processing, and influences of disruptive or irrelevant image content may be reduced or even eliminated, which in turn may be used to increase the achievable reliability and/or accuracy of the method.
In some embodiments, at least one of the selected reference points for a specific class is determined as or on the basis of the position of a calculated centroid of the pixels assigned to the body area corresponding to the class. The centroid may in this case be defined in particular as a geometric centroid. The calculated position of the centroid may in particular correspond to the position of a pixel represented by the image data, although this would not be absolutely necessary. If the reference point is determined on the basis of the position of a calculated centroid, this may be achieved in particular by averaging the positions of multiple other reference points, which in turn may in particular each be centroids of an associated body area in the image represented by the image data. By way of example, a reference point that is intended to correspond, in the image, to a point on the body axis of the person may be calculated by averaging the positions of two centroids that relate specifically to the corresponding body areas on the left and right half of the body, respectively (for example centroids of the left torso area and of the right torso area of the person). One advantage of using centroids of defined image areas is that these are able to be calculated efficiently and with high accuracy using known methods, which in turn may have a positive effect on the efficiency of the method and its accuracy and reliability.
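A minimal sketch of such a centroid-based reference point, including a body-axis point obtained by averaging a left and a right torso centroid (the class labels and pixel positions below are hypothetical):

```python
import numpy as np

def centroid(pixel_positions):
    """Geometric centroid of the pixels assigned to one body-area class."""
    return np.asarray(pixel_positions, dtype=float).mean(axis=0)

# hypothetical pixel positions of the left and right torso classes (in
# practice: the selected pixels assigned to these classes by the classification)
pixels_by_class = {
    "torso_left":  [[-0.15, 1.00, 0.50], [-0.12, 1.10, 0.51], [-0.10, 0.95, 0.49]],
    "torso_right": [[0.14, 1.02, 0.50], [0.11, 1.08, 0.52], [0.09, 0.97, 0.49]],
}

# a reference point on the body axis, obtained by averaging the two centroids
body_axis_point = 0.5 * (centroid(pixels_by_class["torso_left"])
                         + centroid(pixels_by_class["torso_right"]))
```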
In some embodiments, at least one of the selected reference points for a specific class is determined as or on the basis of the position of a pixel on a contour of the body area represented by the assigned pixels and corresponding to the class. The pixel on the contour may in particular correspond to an extreme point of the contour. By way of example, the pixel may correspond, in each case in the image of the person, to the top of the head, the outer ends of the shoulders, or the lower (that is to say near the legs) end of the torso of the person. This makes it possible to expand the range of available reference points and in particular also to combine them with the abovementioned centroids, in order thus to have reference points available for selection for a broader range of possible body positions (poses) of the person, each optimized with regard to the reliability and accuracy of the method.
In some embodiments, the at least one selected reference point is defined as a point that corresponds, in the image of the person represented by the image data, to one of the following points on the body of the person: (i) a top of the head; (ii) a point on each shoulder that is highest or furthest from the body axis of the person; (iii) a point of the torso nearest the top of the legs; (iv) a lap point determined on the basis of the left and right points of the torso closest to the top of the legs on the respective side with respect to the body axis; (v) a reference point on the torso ascertained on the basis of the centroid of the area of the torso lying on the corresponding half of the body to the left or right with respect to the body axis or a reference point ascertained on the basis of multiple such centroids; (vi) a point at the location of an eye or on a straight line connecting the eyes. The common feature of all of these reference points is that they are typically recognized with a high level of reliability as distinctive points within an image sensor-based image and their positions are able to be determined with corresponding accuracy.
In some of these embodiments, a sitting height of the person is determined as a distance used to determine the estimated value. For this purpose, each of the following individual distances between two respectively associated reference points is calculated, and these calculated distances are added together to determine a value for the sitting height: (i) distance between a point closest to the top of the legs or the lap point and a centroid, in particular geometric centroid, of the lower torso located below the lowermost costal arch of the person; (ii) distance between the centroid of the lower torso and a centroid of the upper torso located above the lowermost costal arch of the person; (iii) distance between the centroid of the upper torso and a point on the connecting line between the two points on each of the two shoulders that are highest or furthest from the body axis of the person; (iv) distance from the point on the connecting line between the two shoulders to the top of the head. One advantage of these embodiments is that reliable and relatively accurate determination of the sitting height of the person is possible even if the person was in a body position deviating from a straight, upright sitting posture, in particular in a bent body position, during the image acquisition. These embodiments may thus also be used to further increase the achievable accuracy and reliability of the method as a whole.
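The cumulative determination of the sitting height from the four individual distances listed above could, for example, be sketched as follows (the reference-point names are placeholders):

```python
import numpy as np

def cumulative_sitting_height(lap_point, lower_torso_centroid,
                              upper_torso_centroid, shoulder_point, head_top):
    """Sitting height as the sum of the four individual segment distances
    described above (lap point -> lower torso centroid -> upper torso
    centroid -> point on the shoulder connecting line -> top of the head);
    summing the segments keeps the estimate usable for bent postures."""
    chain = [np.asarray(p, dtype=float) for p in (lap_point, lower_torso_centroid,
                                                  upper_torso_centroid,
                                                  shoulder_point, head_top)]
    return float(sum(np.linalg.norm(b - a) for a, b in zip(chain, chain[1:])))
```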
In some embodiments, a sitting height of the person and a shoulder width of the person are used as two of the distances used to determine the estimated value. These embodiments may be used advantageously in particular in application cases in which it should be expected that the person is seated during the image acquisition, as is typically the case for instance with a driver or passenger in a vehicle, in particular in a motor vehicle. It is possible inter alia for exactly two of the distances, in particular exclusively the sitting height and the shoulder width of the person, to be used as distances for determining the estimated value for the weight of the person. It has been found that using precisely these two special distances as part of the method regularly, that is to say for a large number of different body positions of the person, leads to a particularly reliable and precise estimation of the weight of the person.
In some embodiments, a plurality of preliminary values for the body weight are determined on the basis of various ones of the determined distances, and the at least one estimated value for the body weight is calculated by mathematically averaging the preliminary values. The mathematical averaging operation may in this case in particular be an arithmetic, a geometric or a quadratic averaging operation, in each case with or without weighting, or a median formation, or comprise same (the same also applies in each case below if an averaging operation or mean calculation is mentioned). The use of such a mathematical averaging operation based on a plurality of provisionally estimated values for the body weight of the person may be used to increase the mathematical robustness of the weight estimation method and thus in turn its reliability and accuracy. As part of an optional weighting operation, the relative influence of various input variables of the respective averaging operation on the result of the averaging operation may in particular be adjusted and optimized in a targeted manner. By way of example, in the averaging operation to determine an estimated value for the body weight, a provisional value for the body weight determined on the basis of an ascertained sitting height may thus be weighted to a greater or lesser extent than a provisional value for the body weight determined on the basis of an ascertained shoulder width.
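A minimal sketch of such a weighted combination of preliminary body-weight values (the weights and numeric values are purely illustrative tuning parameters):

```python
import numpy as np

def combine_preliminary_weights(preliminary_values, weights=None):
    """Combine several preliminary body-weight values (e.g. one derived from
    the sitting height, one from the shoulder width) into a single estimate
    by a weighted arithmetic mean; the weights are tuning parameters."""
    return float(np.average(np.asarray(preliminary_values, dtype=float),
                            weights=weights))

# hypothetical example: the sitting-height-based value is weighted more heavily
estimated_weight_kg = combine_preliminary_weights([72.0, 68.5], weights=[0.7, 0.3])
```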
In some embodiments, the reference data used to determine the estimated value for the body weight are selected from multiple available sets of reference data on the basis of one or more previously captured characteristics of the person. These characteristics may relate in particular to an ethnicity or region of origin, an age or a gender of the person. Since such characteristics in many cases clearly correlate with the characteristics of a frequency distribution for body weight, the reliability and accuracy of the method may thereby likewise be further increased. If, for example, the person is elderly and thus belongs to an age cohort born considerably earlier, it may be assumed that the body weight distribution, similarly to the body height distribution for this age cohort, is shifted towards smaller values compared with the corresponding values of a considerably younger age cohort, because in recent decades, at least in most industrialized countries, people have on average become taller and heavier.
In some embodiments, the comparison of the at least one determined distance with the reference data takes place using a regression method, which may in particular be a quadratic or exponential regression method, since these have proven to be particularly suitable methods for this comparison. Linear regression methods may also be used in principle, although the abovementioned quadratic and exponential regression methods are often even more suitable for said comparison.
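By way of illustration, a quadratic regression of body weight against a single distance measure could be sketched as follows; the reference data shown are hypothetical and would in practice be taken from an anthropometric reference database:

```python
import numpy as np

# hypothetical reference data: sitting height in metres versus body weight in kg
sitting_height_ref = np.array([0.80, 0.85, 0.90, 0.95, 1.00])
body_weight_ref = np.array([55.0, 63.0, 72.0, 82.0, 94.0])

# quadratic regression: weight ~ a*d**2 + b*d + c
a, b, c = np.polyfit(sitting_height_ref, body_weight_ref, deg=2)

def weight_from_sitting_height(d):
    """Preliminary body-weight value for a measured sitting height d."""
    return a * d**2 + b * d + c

preliminary_weight = weight_from_sitting_height(0.92)
```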
In some embodiments, the image data represent the image sensor-based recording in three spatial dimensions. This may be achieved in particular by using the image data that have been captured by a 3D image sensor (3D camera). The 3D image sensor may in particular be what is known as a time-of-flight (TOF) camera. The use of such three-dimensional image data has the particular advantage over the use of 2D image data that the positions of the reference points and their distances are able to be determined directly on the basis of the image data in three-dimensional space and no loss of accuracy due to the use of only two-dimensional image data has to be accepted, or no effort has to be made to combine multiple 2D images recorded from different perspectives.
In some embodiments, the method furthermore comprises outputting a respective value for at least one of the determined distances or for a respective position of at least one of the determined reference points. In addition to the at least one estimated value for the body weight, said anthropometric information may thus also be made available, in particular including for the purpose of machine-based or automatic further processing.
In some embodiments, the method furthermore comprises controlling (in particular activating, deactivating, open-loop or closed-loop controlling, or adjusting) at least one component of a vehicle or of another system, in particular a medical system or a body measurement system, on the basis of the output estimated value for the body weight of the person.
In particular, according to some of these embodiments, in the case of a vehicle application, the control may be performed in relation to one or more of the following vehicle components: seat (in particular with regard to sitting height, seat position, backrest adjustment, seat heating), steering device, safety belt, airbag (in particular with regard to airbag filling/target pressure), interior or exterior mirrors, air-conditioning system, communication device, infotainment system, navigation system. The respective control may in particular take place fully automatically or semi-automatically, meaning that the at least one ascertained estimated value for the body weight is used alone or in conjunction with one or more other variables or parameters for the automatic control of one or more vehicle or system components.
In some embodiments, the application of one or more of the confidence criteria may also result in a respective empty selection. The occurrence of such an empty selection may then be used in particular as a stop criterion for stopping, repeating or pausing the method.
A second aspect of the invention relates to a device for automatically estimating a body weight of a person, wherein the device is configured to carry out the method according to the first aspect of the invention.
A third aspect of the invention relates to a vehicle having a device according to the second aspect of the invention. The vehicle may in particular be configured to carry out the method according to the first aspect in the form of one of the embodiments mentioned above with reference to the control of vehicle components on the basis of the at least one ascertained estimated value for the body weight.
A fourth aspect of the invention relates to a computer program comprising instructions that, when the program is executed by a data processing device, prompt the latter to carry out the method according to the first aspect of the invention. The data processing device may in particular be provided by the device according to the second aspect of the invention or form part thereof.
The computer program may in particular be stored in a non-volatile data carrier. This is preferably a data carrier in the form of an optical data carrier or a flash memory module. This may be advantageous if the computer program as such is to be handled independently of a processor platform on which the one or more programs are to be run. In another implementation, the computer program may be present as a file on a data processing unit, in particular on a server, and may be downloaded via a data connection, for example the Internet or a dedicated data connection such as for instance a proprietary or local network. The computer program may additionally have a plurality of individual interacting program modules.
The device according to the second aspect or the vehicle according to the third aspect may accordingly have a program memory in which the computer program is stored. As an alternative, the device or the vehicle may also be configured to access a computer program available externally, for example on one or more servers or other data processing units, via a communication connection, in particular in order to exchange therewith data that are used during the course of the method or computer program or represent outputs of the computer program.
The features and advantages explained in relation to the first aspect of the invention also apply correspondingly to the further aspects of the invention.
Further advantages, features and application possibilities of the present invention emerge from the following detailed description in conjunction with the figures.
In the figures, the same reference signs are used throughout for the same or corresponding elements of the invention.
A person P, who is in particular a driver of the vehicle 100 in the example that is shown, is located on a seat 140 in the vehicle 100. In order to capture the person P using image sensors, one or more image sensors are provided at one or more locations 110, 120 or 130 in or on the vehicle 100. One or more of these image sensors may in particular be 3D cameras, in particular of the time-of-flight (TOF) type, which are able to capture the person P in three spatial dimensions using image sensors and to deliver corresponding image data, which in particular represent a corresponding depth image of the person P.
In a further step 204, the 3D image data may be preprocessed, in particular filtered, in order to improve the image quality. Such filtering may in particular serve to reduce or remove image noise or image components that are not required for the rest of the method, such as for example image backgrounds or other irrelevant image components or artefacts. In order to index the individual pixels, a running index i may additionally be set in step 206.
This is then followed by a step 208, in which the preprocessed image data are subjected to a classification method in which each pixel v_i is classified with respect to a predetermined body area classification 400, in which different body areas each form a class, that is to say the pixel is assigned to one of these classes and a respective confidence value C_i is determined for this class assignment. One exemplary classification is illustrated in
It is then checked, in a step 210, for the respective pixel v_i that has just been classified, whether the associated confidence value C_i satisfies a first confidence criterion, which in the present example is defined as a confidence threshold C_T. Only if the confidence value C_i lies above this confidence threshold C_T is the associated pixel v_i selected, in a step 212, for use in the rest of the method; if not, it is discarded (step 216). In both cases, a check (i=N?) then takes place as to whether there are still further pixels to be classified (steps 214 or 218). If this is the case (214/218—no), then the method continues in step 208 with the classification of the next pixel, with the index incremented to i=i+1.
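A compact sketch of this confidence-gated pixel selection loop (steps 208 to 218), assuming a classifier that returns a class label and a confidence value per pixel, could look as follows:

```python
# Sketch of the pixel loop in steps 208-218; "classify_pixel", the pixel list
# and the threshold value are placeholders.
def select_classified_pixels(pixels, classify_pixel, confidence_threshold):
    selected = {}                                 # class label -> selected pixels
    for v_i in pixels:                            # loop over i = 1..N
        body_class, c_i = classify_pixel(v_i)     # step 208: classification
        if c_i > confidence_threshold:            # step 210: C_i > C_T ?
            selected.setdefault(body_class, []).append(v_i)   # step 212: keep
        # otherwise the pixel is discarded (step 216)
    return selected                               # input for steps 220 ff.
```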
Otherwise (214/218—yes), in step 220, a further index j is initialized for a number of M classes that are subsequently relevant. Differentiation into relevant and irrelevant classes makes sense in particular when the classification method, in step 208, is performed by way of a classification that makes more classes available than are specifically required to ascertain an estimated value for the weight. This may occur especially when the classes that are not required in this respect are required as part of another application that uses the same classification. Step 220 may also coincide with step 206.
In a step 222, for the current class j, that is to say for the assigned body area, a reference point R_j is then calculated according to a corresponding specification and exclusively on the basis of the pixels previously selected in step 212 and assigned to this class j. The specification may in this case specify in particular that the reference point should be calculated as a centroid, in particular volume centroid or geometric centroid, of the set of pixels or pixel positions assigned to the class j in the image. As an alternative, however, the specification may also specify in particular that a specific pixel on the contour of the volume area (surface area in the case of a 2D image) defined by this set of pixels (point cloud) should be selected as reference point R_j. This may in particular be a pixel on the contour that, with regard to at least one of its image coordinates in relation to a coordinate system applied to the image, which does not necessarily have to correspond to the image grid with which the image was recorded, has an extreme value out of all of the pixels located on the contour. Various reference points are illustrated in
Furthermore, in a step 224, a confidence value D_j for the calculated position of R_j is also calculated, this being able to take place in particular on the basis of averaging or forming a minimum value of the confidence values C_i of the selected pixels v_i used to calculate this position.
Furthermore, in a step 226, a check is performed for the respective reference point R_j that has just been determined as to whether the associated confidence value D_j satisfies a second confidence criterion, which, in the present example, is defined as a confidence threshold D_T. Only if the confidence value D_j lies above this confidence threshold D_T is the associated reference point R_j selected, in a step 228, for use in the rest of the method; if not, it is discarded (step 232). In both cases, a check (j=M?) then takes place as to whether there are still further reference points to be calculated (steps 230 or 234). If this is the case (230/234—no), then the determination of the next reference point, with the index incremented to j=j+1, is continued in step 222. The method may also be configured such that, if the confidence value D_j of a mandatorily required reference point does not satisfy the second confidence criterion, the method is stopped and run through again at a later time on the basis of new image data. This may in particular also be the case when no pixels or reference points at all satisfy the first or second confidence criterion with their respective confidence values.
With reference now to
Finally, in a step 238, corresponding distances between the associated reference point positions are determined on the basis of the corrected reference point positions for predetermined pairs of reference points. This may in particular take place by calculating one or more distances that individually or cumulatively represent a measure of the sitting height or the shoulder width of the person.
In order to arrive at an estimated value for the body weight of the person P from the distances thus determined, these distances are used, in a step 240, as input variables for a regression analysis or another comparison with data from a database, the data of which define a relationship between different values for the determined measures (in the present example, specifically sitting height or shoulder width), on the one hand, and various body weight values corresponding thereto, on the other hand. There are large numbers of such anthropometric databases, in particular for the respective populations of different countries or regions. By way of example, the German Federal Institute for Occupational Safety and Health provides such a database for Germany, in particular on the Internet.
If, as in the present example, different measures are used as input variables for the regression or database, then a corresponding value, in particular a provisional value, for the estimated body weight G of the person may be determined for each of these variables. These various provisional values may then be combined, in particular by averaging, to form an overall estimated value for the body weight G.
Finally, in a step 242, the determined overall estimated value G may be output, in particular on a human-machine interface, or else, as illustrated in the present example, to a vehicle component of the vehicle 100 in order to control this component by way of the output value. The vehicle component may in particular be an adjustable exterior mirror, an airbag able to be configured or deactivated in different ways, or an adjustable passenger seat.
It is thereby possible to perform such control operations, which in the past could not be achieved at all or could be achieved only using dedicated sensors provided for this purpose, in particular scales, on the basis of image data that are in many cases already captured for other applications, meaning that dual or multiple use of these image data is made possible with at least a partial saving of application-specific effort or components.
As explained above with reference to
The following are used as reference points R for this purpose: (i) a top 501 of the head, (ii) the uppermost points 502R/L of the left and right shoulders 403R/L in terms of height and the shoulder centroid 503 determined therefrom by geometric averaging, (iii) the respective centroids 504R/L of the upper right and left torso areas 407R/L and the upper torso centroid 505 determined therefrom by geometric averaging, (iv) the respective centroids 506R/L of the lower right and left torso areas 408R/L and the lower torso centroid 507 determined therefrom by geometric averaging, and (v) the respective lowermost points 508R/L of the lower right and left torso areas 408R/L and the lap point 509 determined therefrom by geometric averaging.
The shoulder width 360 may then be determined easily by calculating the distance between the two reference points 502R and 502L. The sitting height, on the other hand, is calculated cumulatively here, that is to say by individually determining multiple distances and summing them. A first of these distances is the distance 510 between the top 501 of the head and the shoulder centroid 503. A second of these distances is the distance 520 between the shoulder centroid 503 and the upper torso centroid 505. A third of these distances is the distance 530 between the upper torso centroid 505 and the lower torso centroid 507. Finally, a fourth and last of these distances is the distance 540 between the lower torso centroid 507 and the lap point 509. The sitting height that is sought is the sum of these four distances. This division of the sitting height determination into multiple individual distances has the advantage that it delivers more accurate and more reliable results, especially in the case of body positions that deviate significantly from an upright or straight posture, than a sitting height determination based on directly determining the distance between the reference points 501 and 509 by way of subtraction.
With reference to
A plausibility check may additionally also be carried out, in which the position of the shoulder centroid 503 or of the right shoulder reference point 502R is checked using a further reference point. For this purpose, in particular a distance from this reference point may be ascertained and compared with an associated reference distance for the purpose of a check. Distance ratios may also serve as a basis for a plausibility check in a similar way, in addition to or instead of pure distance values.
While at least one exemplary embodiment has been described above, it should be noted that there are a large number of variations in this respect. It should also be noted here that the described exemplary embodiments constitute only non-limiting examples and they are not thereby intended to limit the scope, applicability or configuration of the devices and methods described here. Instead, the above description will provide a person skilled in the art with an indication for the implementation of at least one exemplary embodiment, wherein it is understood that various changes in the means of functioning and the arrangement of the elements described in an exemplary embodiment may be made without in the process departing from the subject matter respectively defined in the appended claims or its legal equivalents.
Number | Date | Country | Kind
--- | --- | --- | ---
10 2020 120 600.3 | Aug 2020 | DE | national

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/EP2021/071096 | 7/28/2021 | WO |