A variety of methods have been proposed to measure head impacts. One approach uses sensors in a helmet. This approach is flawed since the helmet may rotate on the head during an impact, or even become displaced.
Another approach uses tri-axial accelerometers embedded in patches attached to the head. This approach has limited accuracy since the position and orientation of the patches on the head are not known precisely.
Yet another approach uses a combination of a tri-axial linear accelerometer and a gyroscope. This approach yields rotations and linear acceleration at the sensor location. However, when the desire is to measure the motion of a rigid body, such as a human head, it is often impossible or impractical to place a sensor at the center of the rigid body.
The accompanying figures, in which like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and to explain various principles and advantages all in accordance with the present invention.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
Before describing in detail embodiments that are in accordance with the present invention, it should be observed that the embodiments reside primarily in combinations of method steps and apparatus components related to monitoring motion of a substantially rigid body, such as a head. Accordingly, the apparatus components and method steps have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
In this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated that embodiments of the invention described herein may include the use of one or more conventional processors and unique stored program instructions that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of monitoring head accelerations described herein. The non-processor circuits may include, but are not limited to, a radio receiver, a radio transmitter, signal drivers, clock circuits, power source circuits, and user input devices. As such, these functions may be interpreted as a method to monitor head accelerations. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Thus, methods and means for these functions have been described herein. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The present disclosure relates to a method and apparatus for monitoring motion of a rigid body, such as a human head, relative to a first location. Linear and rotational motions are sensed by one or more sensors attached to the rigid body at locations displaced from the first location. The sensed rotation is used to compensate for the angular and centripetal acceleration components in the sensed linear motion. In one embodiment, the angular and centripetal acceleration components are estimated explicitly from the sensed rotation. In a further embodiment, the sensed rotations are used to estimate the relative orientations of two or more sensors, enabling the linear motions measured by the two sensors to be combined so as to cancel the angular and centripetal accelerations.
In one embodiment, a six degree-of-freedom sensor comprises a three-axis linear motion sensor, such as a tri-axial accelerometer that senses local linear motion, and a rotational sensor that measures three components of a rotational motion. The rotational sensor may be, for example, a three-axis gyroscope that senses angular velocity, a three-axis rotational accelerometer that senses the rate of change of angular velocity with time, a three-axis angular displacement sensor such as a compass, or a combination thereof. The six degree-of-freedom sensor may comprise more than six sensing elements. For example, both rotational rate and rotational acceleration could be sensed (or even rotational position). These signals are not independent, since they are related through their time histories. However, having both types of sensors may avoid the need for integration or differentiation.
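As a purely illustrative sketch (not part of the claimed apparatus), one way to represent a six degree-of-freedom sample in software is shown below; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class SixDofSample:
    """One sample from a six degree-of-freedom sensor (hypothetical field names)."""
    t: float                               # sample time in seconds
    lin_acc: np.ndarray                    # 3-vector from the tri-axial accelerometer (m/s^2)
    ang_vel: Optional[np.ndarray] = None   # 3-vector from a three-axis gyroscope (rad/s)
    ang_acc: Optional[np.ndarray] = None   # 3-vector from a rotational accelerometer (rad/s^2)
```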
The processor 100 receives the sensor signals 108 and 110 and from them generates angular acceleration signals 112 and linear acceleration signals 114 in a frame of reference that does not have its origin at a sensor position and may not have its axes aligned with the axes of the sensor.
In one embodiment, which uses two sensors, the origin of the frame of reference is at the midpoint of the line A-A between the sensors 102 and 104, as denoted in the drawings.
In a further embodiment, which uses a single sensor, the origin may be selected to be any point whose position is known relative to the single sensor.
In the selected frame of reference, the vector of angular velocities of the substantially rigid body is denoted as ω, the angular acceleration vector is denoted as ω̇, and the linear acceleration vector is denoted as a.
It is noted that the angular acceleration may be obtained from the angular velocity by differentiation with respect to time and, conversely, the angular velocity may be obtained from the angular acceleration by integration with respect to time. These integrations or differentiations may be performed using an analog circuit, a sampled-data circuit, or digital signal processing. Thus, either type of rotation sensor could be used. Alternatively, or in addition, a rotational displacement sensor, such as a magnetic field sensor, may be used. Angular velocity and angular acceleration may then be obtained by single and double differentiation, respectively.
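For example, with uniformly sampled signals the conversion between angular velocity and angular acceleration may be approximated numerically. The sketch below is a minimal illustration using NumPy and SciPy; the fixed sample rate and the synthetic signal are assumptions for the example only.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 1000.0                                   # assumed sample rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)             # one second of samples
omega = np.column_stack([np.sin(2 * np.pi * 5 * t),
                         np.cos(2 * np.pi * 5 * t),
                         np.zeros_like(t)])   # synthetic three-axis angular velocity (rad/s)

# Angular acceleration by differentiation of the angular velocity with respect to time.
omega_dot = np.gradient(omega, 1.0 / fs, axis=0)

# Angular velocity recovered from angular acceleration by integration (up to the initial value).
omega_recovered = cumulative_trapezoid(omega_dot, dx=1.0 / fs, axis=0, initial=0.0) + omega[0]
```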
The response s of a linear accelerometer at a position r = {r1, r2, r3}ᵀ in the selected frame of reference is given by
s = Slin[a + (K(ω̇) + K²(ω))r] = Slin[a − K(r)ω̇ + P(r)γ(ω)],   (1)
where a is the linear acceleration vector at the origin of the frame of reference, γ(ω) is a vector of centripetal acceleration terms formed from products of the components of the angular velocity ω, Slin is the linear sensitivity matrix for the sensor (which is dependent upon the sensor orientation), K is the skew-symmetric matrix function for which K(u)v = u × v, so that K²(ω)r is the centripetal acceleration ω × (ω × r), and P(r) is the matrix for which P(r)γ(ω) = K²(ω)r.
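The sketch below is an illustrative check of the algebra behind equation (1): it implements the skew-symmetric cross-product matrix K and verifies numerically that K²(ω)r equals the centripetal term ω × (ω × r) and that K(ω̇)r = −K(r)ω̇. The function names are illustrative, and no particular explicit form of P and γ is assumed.

```python
import numpy as np

def K(v):
    """Skew-symmetric cross-product matrix: K(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def accelerometer_response(S_lin, a, omega, omega_dot, r):
    """Forward model of equation (1): output of a linear accelerometer at position r."""
    return S_lin @ (a + (K(omega_dot) + K(omega) @ K(omega)) @ r)

# Numerical check of the identities used in equation (1).
rng = np.random.default_rng(0)
a, omega, omega_dot, r = (rng.standard_normal(3) for _ in range(4))
assert np.allclose(K(omega) @ K(omega) @ r, np.cross(omega, np.cross(omega, r)))  # centripetal term
assert np.allclose(K(omega_dot) @ r, -K(r) @ omega_dot)                           # sign identity
s = accelerometer_response(np.eye(3), a, omega, omega_dot, r)                     # ideal sensitivity
```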
In general, for a rotational sensor, the response vector is
w = Srot(ω, ω̇),   (5)
where Srot is the angular sensitivity matrix of the sensor. From this we can get (using integration or differentiation as required)
ω = F(w),
ω̇ = G(w),   (6)
where F and G are functions that depend upon the angular sensitivity matrix Srot of the sensor.
In accordance with a first aspect of the disclosure, the linear acceleration at the origin of the frame of reference may be derived from the sensed linear and rotational motion.
Rearranging equation (1) gives
a = Slin⁻¹s + K(r)ω̇ − P(r)γ(ω),   (7)
and estimating the rotational components from the rotation sensor signal w gives
a = Slin⁻¹s + K(r)G(w) − P(r)γ(F(w)),   (8a)

or

a = Slin⁻¹s − [K(G(w)) + K²(F(w))]r.   (8b)
Thus, the linear acceleration at the origin is obtained as a combination of the linear motion s, and rotational motion w sensed at the sensor location, the combination being dependent upon the position r of the sensor relative to the origin and the linear sensitivity and orientation of the sensor through the matrix Slin. The matrix parameters K(r) and P(r) used in the combination (8a) are dependent upon the position r.
For a rigid body, the rotational acceleration at the origin is the same as the rotational acceleration at the sensor location and is given by equation (6).
It is noted that the combination defined in equations (8a) and (8b) requires knowledge of the sensitivities of the sensor and knowledge of the position of the sensor relative to the origin.
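A minimal sketch of the single-sensor combination of equation (8b), assuming that the sensitivity matrix Slin and the sensor position r are known (for example, from the self-calibration described later); the function and variable names are illustrative.

```python
import numpy as np

def K(v):
    """Skew-symmetric cross-product matrix: K(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def acceleration_at_origin(s, omega, omega_dot, S_lin, r):
    """Equation (8b): linear acceleration at the origin from a single sensor.

    s         -- measured tri-axial accelerometer output (3-vector)
    omega     -- angular velocity F(w) at the sensor (3-vector, rad/s)
    omega_dot -- angular acceleration G(w) at the sensor (3-vector, rad/s^2)
    S_lin     -- 3x3 linear sensitivity matrix of the sensor (assumed known)
    r         -- position of the sensor relative to the origin (assumed known)
    """
    return np.linalg.solve(S_lin, s) - (K(omega_dot) + K(omega) @ K(omega)) @ r
```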
In equation (7), the matrix Slin is dependent upon the orientation of the sensor relative to the frame of reference.
In one embodiment the sensor is oriented in a known way on the rigid body. This is facilitated by marking the sensor (for example with an arrow).
In a further embodiment, the sensor is shaped to facilitate consistent positioning and/or orientation on the body. For example, a behind-the-ear sensor may be shaped to conform to the profile of an ear, or a nose sensor may be shaped to conform to the bridge of the nose.
In a still further embodiment, a measurement of the sensor orientation relative to the direction of gravity is made and the frame of reference is fixed relative to the direction of gravity.
Generic System
In a still further embodiment, the sensor orientation is measured relative to a reference sensor, shown as 118 in the drawings.
A sensor may be attached using self-adhesive tape, for example. The sensor should be as light as possible, so that the resonance frequency of the sensor mass on the compliance of the skin is as high as possible (see, for example, ‘A Triaxial Accelerometer and Portable Data Processing Unit for the Assessment of Daily Physical Activity’, Carlijn V. C. Bouten et al., IEEE Transactions on Biomedical Engineering, Vol. 44, No. 3, March 1997, page 145, column 2, and references therein). A self-adhesive, battery-powered sensor may be used, the battery being activated when the sensor is attached to the head.
The sensor 102 may be calibrated with respect to the reference sensor 118.
Single Sensor
The system 200 enables monitoring motion of a substantially rigid body relative to a first location in response to linear 108″ and rotational motion signals 108′ from a motion sensor 102 locatable on the substantially rigid body at a second location, displaced from the first location. The system comprises a processing module 202 responsive to the rotational motion signal 108′ and operable to produce a plurality of rotational components, 112 and 204. A memory 206 stores parameters dependent upon the first and second locations. A combiner 208 combines the plurality of rotational components with the linear motion signals 108″, dependent upon the parameters stored in the memory 206, to provide an estimate of the motion at the first location in the substantially rigid body. The signals 114 and/or 112, representative of the motion at the first location, are provided as outputs. The rotational components comprise first rotational components 112, dependent upon angular acceleration of the substantially rigid body and second rotational components 204 dependent upon angular velocity of the substantially rigid body.
A flow chart of the corresponding single-sensor monitoring method is shown in the drawings.
While the approach described above has the advantage of using a single sensor, one disadvantage is that, unless a reference sensor is used, the approach requires knowledge of the position of the sensor relative to the origin. However, if a reference sensor is used, the position, orientation and sensitivity may be estimated.
Two or More Sensors
In accordance with a second aspect of the present disclosure, the linear acceleration at the origin of the frame of reference may be derived from the sensed linear and rotational motion at two or more sensors. In one embodiment, two sensors are used, located on opposite sides of the desired monitoring position. For example, one sensor could be placed on each side of a head to monitor motion relative to a location between the sensors. This approach avoids the need to know the sensor locations relative to the selected origin, and also avoids the need for differentiation or integration with respect to time, although more than one sensor is required.
To facilitate explanation, a two-sensor system is considered first. The first and second sensors are referred to as ‘left’ and ‘right’ sensors, however, it is to be understood that any pair of sensors may be used.
The origin is defined as the midpoint between the two sensors. Thus, the sensor positions are rL = {r1, r2, r3}ᵀ for the left sensor and rR = {−r1, −r2, −r3}ᵀ for the right sensor.
The accelerations are not necessarily the same, since, as discussed above, each measurement is in the frame of reference of the corresponding sensor. In the frame of reference of the left sensor,
SL,lin⁻¹sL = a + [K(ω̇) + K²(ω)]rL,   (9)

and

R⁻¹SR,lin⁻¹sR = a + [K(ω̇) + K²(ω)]rR,   (10)
where R is a rotation matrix that is determined by the relative orientations of the two sensors and the sensitivity matrices are relative to the sensor’s own frame of reference. R⁻¹SR,lin⁻¹sR is the vector of compensated and aligned right sensor signals and SL,lin⁻¹sL is the vector of compensated left sensor signals.
Averaging (9) and (10) gives
½SL,lin⁻¹sL + ½R⁻¹SR,lin⁻¹sR = a + ½[K(ω̇) + K²(ω)](rL + rR) = a.   (11)
Here we have used rL + rR = 0.
This allows the linear acceleration at the origin (the midpoint) to be estimated as the simple combination

a = ½SL,lin⁻¹sL + ½R⁻¹SR,lin⁻¹sR.   (12)
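A minimal sketch of the two-sensor combination of equation (12), assuming the rotation matrix R and the two sensitivity matrices are known or have been estimated as described below; the names are illustrative.

```python
import numpy as np

def midpoint_acceleration(s_left, s_right, S_left, S_right, R):
    """Equation (12): linear acceleration at the midpoint between two sensors.

    s_left, s_right -- raw tri-axial accelerometer outputs (3-vectors)
    S_left, S_right -- 3x3 linear sensitivity matrices in each sensor's own frame
    R               -- rotation matrix aligning the right sensor frame with the left
    """
    left_aligned = np.linalg.solve(S_left, s_left)                          # S_L,lin^-1 s_L
    right_aligned = np.linalg.solve(R, np.linalg.solve(S_right, s_right))   # R^-1 S_R,lin^-1 s_R
    # With r_L + r_R = 0, the angular and centripetal terms cancel in the average (equation (11)).
    return 0.5 * (left_aligned + right_aligned)
```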
In some applications, the left and right sensors may be oriented with sufficient accuracy that the rotation matrix can be assumed to be known. In other applications, the rotation matrix R may be estimated from a number of rotation measurements (rate or acceleration). The measurements may be collected as
WR = RWL,   (13)
where WL and WR are signal matrices given by
WR = [wR,1 wR,2 … wR,N],   (14)
WL = [wL,1 wL,2 … wL,N].
This equation may be solved by any of a variety of techniques known to those of ordinary skill in the art. For example, an unconstrained least squares solution is given by
R = WRWLᵀ(WLWLᵀ)⁻¹.   (15)
The solution may be constrained such that R is a pure rotation matrix.
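A sketch of the unconstrained least-squares solution of equation (15), followed by one possible way to constrain the result to a pure rotation matrix (projection onto the nearest proper rotation via the singular value decomposition); this particular constraint step is an illustrative choice, not mandated by the text.

```python
import numpy as np

def estimate_alignment(W_left, W_right, constrain=True):
    """Estimate R such that W_right ≈ R @ W_left (equations (13) to (15)).

    W_left, W_right -- 3xN matrices whose columns are corresponding rotation measurements.
    """
    # Unconstrained least-squares solution, equation (15).
    R = W_right @ W_left.T @ np.linalg.inv(W_left @ W_left.T)
    if constrain:
        # One way to constrain R to a pure (proper) rotation: nearest rotation via the SVD.
        U, _, Vt = np.linalg.svd(R)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R
```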
Alternatively, the rotation matrix may be found from the rotational motion signals using an iterative algorithm, such as a least mean squares or a recursive least squares algorithm.
The relative orientation may also be obtained by comparing gravitation vectors, provided that the body is not rotating.
More generally, a weighted average of the aligned signals from two or more sensors (adjusted for orientation and sensitivity) may be used to estimate the linear acceleration at a position given by a corresponding weighted average of the sensor positions, provided that the sum of the weights is equal to one. If a is the linear motion at the position

r̄ = Σi αiri,

the weighted average of aligned signals is

Σi αiRi⁻¹Si⁻¹si = a + [K(ω̇) + K²(ω)] Σi αi(ri − r̄) = a,   (16)

where Ri is the alignment matrix for sensor i, αi are weights that sum to unity, and Si is a sensitivity matrix. The vector ri − r̄ denotes the position vector from the position r̄ to sensor i.

Equation (16) is a generalization of equation (12) and describes operation of a system for monitoring motion of a substantially rigid body relative to a first location, in which motion signals sensed at a plurality of second locations are aligned, weighted and combined to provide an estimate of the motion at the first location in the substantially rigid body. A signal representative of the motion of the substantially rigid body relative to the first location is output or saved in a memory.

The position vector of the first location is a weighted average of the position vectors of the plurality of second locations, and the estimate of the motion at the first location comprises a corresponding weighted average of the plurality of aligned motion vectors.
Measurements of the motion at the two sensors may be synchronized by means of a synchronization signal, such as a clock with an encoded synchronization pulse. The clock may be generated by the processor 100 or by one of the sensors. When identical sensors are used, a ‘handshake’ procedure may be used to establish which sensor will operate as the master and which will operate as the slave. Such procedures are well known to those of ordinary skill in the art, particularly in the field of wired and wireless communications.
The signals 112 and 116 together describe the motion of the rigid body and may be used to determine, for example, the direction and strength of an impact to the body. This has application to the monitoring of head impacts to predict brain injury.
In a still further embodiment, the alignment matrix R is found by comparing measurements of the gravity vector made at each sensor location. These measurements may be made by the linear elements of the sensor or by integrated gravity sensors. In this embodiment, one of the sensors does not require rotational sensing elements, although such elements may be included for convenience or to improve the accuracy of the rotation measurement.
In one embodiment, the sensors are oriented in a known way on the rigid body. Consistent positioning and orientation of the sensors may be facilitated by shaping or marking each sensor (for example with an arrow). For example, a behind-the-ear sensor may be shaped to conform to the profile of the back of an ear, or a nose sensor may be shaped to conform to the bridge of the nose.
More generally, the sensor elements are coupled to a mounting structure shaped for consistent orientation with respect to a characteristic feature of the substantially rigid body, and the sensor outputs linear and rotational motion signals. In a further embodiment, the mounting structure comprises a flexible band, such as the band 702 shown in the drawings.
Self Calibration
In a further embodiment of the invention, the head mounted sensing system is calibrated relative to a reference sensing system on a helmet, mouthguard or other reference structure. The position of a helmet on a head is relatively consistent. The positioning of a mouthguard, such as a protective mouthguard, is very consistent, especially if it is custom molded to the wearer’s teeth. While both a helmet and a mouthguard can be dislodged following an impact, they move with the head for low level linear and rotational accelerations. The calibration is not simple, since there is a non-linear relationship between the sensor signals due to the presence of centripetal accelerations. The method has application to head motion monitoring, for sports players and military personnel for example, but also has other applications. For example, the relative positions and orientations of two rigid objects that are coupled together, at least for a while, may be determined from sensors on the two bodies.
Self calibration avoids the need to position and orient the sensor accurately on the head and also avoids the need to calibrate the head sensors for sensitivity. This reduces the cost of the head sensors. A unique identifier may be associated with each helmet or mouthguard. This avoids the need to have a unique identifier associated with each head sensor, again reducing cost. Also, signals transmitted to a remote location (such as the edge of a sports field) are more easily associated with an individual person whose head is being monitored. That is, the helmet or mouthguard may be registered as belonging to a particular person, rather than registering each head sensor. Additionally, the helmet or mouthguard sensor may be used as a backup should the head sensor fail and may also detect such failure.
The system 700 also comprises a reference sensor 118 of a reference sensing system 702 coupled to a helmet 704. The reference sensing system 702 may also include a processor, a transmitter and a receiver. A helmet 704 is shown in the drawings.
In operation, the processing module 100 operates to compute a rotation matrix R that describes the orientation of the head mounted sensor 102 relative to the helmet mounted sensor 118. The rotation matrix satisfies
WH = SH,rot R SR,rot⁻¹WR,   (17)
where WH and WR are signal matrices given by

WR = [ωR,1 ωR,2 … ωR,N],
WH = [ωH,1 ωH,2 … ωH,N].   (18)
The subscript ‘R’ denotes the reference sensor and the subscript ‘H’ denotes the head mounted sensor. Since the inverse sensitivity matrix SR,rot⁻¹ of the reference sensor is known, equation (17) may be solved in the processing module for the matrix product SH,rot R, the inverse of which is used to compute rotations relative to the frame of reference of the reference sensor. The matrix product may be estimated when the reference structure is first coupled to the head, or it may be continuously updated during operation whenever the rotations are below a threshold. Higher level rotations are not used, since they may cause the helmet to rotate relative to the head.
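A sketch of one way to estimate the matrix product SH,rot R from synchronized rotation measurements using equation (17); the reference sensitivity SR,rot is assumed known, and the least-squares form mirrors equation (15). The function and variable names are illustrative.

```python
import numpy as np

def estimate_rotation_product(W_head, W_ref, S_ref_rot):
    """Estimate M = S_H,rot @ R from equation (17): W_head ≈ M @ (S_R,rot^-1 @ W_ref).

    W_head, W_ref -- 3xN matrices of synchronized head and reference rotation measurements
    S_ref_rot     -- known 3x3 rotational sensitivity matrix of the reference sensor
    """
    X = np.linalg.solve(S_ref_rot, W_ref)        # S_R,rot^-1 W_R, the aligned reference signals
    M = W_head @ X.T @ np.linalg.inv(X @ X.T)    # least-squares estimate of S_H,rot R
    return M, np.linalg.inv(M)                   # the inverse maps head rotations into the
                                                 # reference sensor's frame of reference
```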
When a linear reference sensor is used, the gravitation vectors measured by the reference and head mounted sensors may be used to estimate the rotation matrix. The rotation matrix satisfies
GH = SH,lin R SR,lin⁻¹GR,   (19)
where GH and GR are matrices of gravity vectors given by

GR = [gR,1 gR,2 … gR,N],
GH = [gH,1 gH,2 … gH,N].   (20)
The gravity vectors are measured during periods where the head is stationary. Equation (19) may be solved for the matrix product SH,linR.
The acceleration at the head mounted sensor may be written as
sH = SH,lin R[a − K(rRH)ω̇ + P(rRH)γ(ω)],   (22)
where a is the acceleration vector at the reference sensor. Since the rotation vectors are known (from the head mounted sensor and/or the reference sensor), equation (22) may be solved in the processing module to estimate the position vector rRH of the head mounted sensor relative to the reference sensor. Additionally, if the position of the center of the head is known relative to the reference sensor on the helmet, the position of the head mounted sensor may be found relative to the center of the head.
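A sketch of one way to estimate the position rRH from equation (22): rearranging gives (SH,lin R)⁻¹sH − a = [K(ω̇) + K²(ω)]rRH, which is linear in rRH and may be solved in the least-squares sense over a number of samples. The stacking below is an illustrative implementation choice.

```python
import numpy as np

def K(v):
    """Skew-symmetric cross-product matrix: K(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_sensor_position(s_head, a_ref, omega, omega_dot, M):
    """Estimate r_RH from equation (22), rewritten as M^-1 s_H - a = [K(w_dot) + K^2(w)] r_RH.

    s_head    -- Nx3 head sensor outputs
    a_ref     -- Nx3 accelerations at the reference sensor
    omega     -- Nx3 angular velocities; omega_dot -- Nx3 angular accelerations
    M         -- 3x3 matrix product S_H,lin @ R obtained from the calibration step
    """
    blocks, rhs = [], []
    for s, a, w, wd in zip(s_head, a_ref, omega, omega_dot):
        blocks.append(K(wd) + K(w) @ K(w))        # 3x3 block multiplying r_RH
        rhs.append(np.linalg.solve(M, s) - a)     # corresponding 3-vector
    A = np.vstack(blocks)                         # (3N) x 3
    b = np.concatenate(rhs)                       # 3N
    r_RH, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r_RH
```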
The orientation can be found from the rotational components. If the linear and rotational sensing elements are in a known alignment with one another, the orientation of the linear sensing elements can also be found. Once the orientation is known, whether predetermined or measured, the sensitivity and positions of the linear elements can be found. The output from a sensing element is related to the rigid body motion {a, ω̇, ω} by
si = gi⁻¹ηiᵀ[a + K(ω̇)ri + K²(ω)ri],   (23)
where ηiᵀ is the orientation and gi⁻¹ is the sensitivity of the sensing element. In matrix format, the relationship may be written as

ηiᵀa = A[gi riᵀ]ᵀ.   (24)

An ensemble averaging over a number of sample points (denoted ⟨·⟩) provides an estimate of the inverse sensitivity of the sensing element and of the position of the sensing element as

[ĝi r̂iᵀ]ᵀ = ⟨AᵀA⟩⁻¹⟨Aᵀηiᵀa⟩,   (25)
where the matrix A is given by
A = [si  −ηiᵀ{K(ω̇) + K²(ω)}].   (26)
Thus, the position and sensitivity of the sensing element may be determined from the sensor output si and the measured rotation, once the orientation is known.
The sensor orientation may be determined, for example, (a) by assumption, (b) from gravity measurements, (c) from rotation measurements, and/or (d) from rigid body motion measurements. Once the orientation is known, the sensitivity and position may be determined from equations (25) and (26) above.
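A sketch of the per-element estimation of equations (23) to (26): with the orientation ηi known, each sample contributes one row of the matrix A, and a least-squares solve over the ensemble yields the inverse sensitivity gi and the position ri. The exact stacking and the variable names below are assumptions consistent with equation (26).

```python
import numpy as np

def K(v):
    """Skew-symmetric cross-product matrix: K(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def calibrate_element(s_i, a, omega, omega_dot, eta_i):
    """Estimate the inverse sensitivity g_i and position r_i of one sensing element.

    s_i       -- length-N array of scalar element outputs
    a         -- Nx3 known rigid body linear accelerations
    omega     -- Nx3 angular velocities; omega_dot -- Nx3 angular accelerations
    eta_i     -- known 3-vector orientation of the element
    """
    rows, rhs = [], []
    for s, acc, w, wd in zip(s_i, a, omega, omega_dot):
        rows.append(np.concatenate(([s], -eta_i @ (K(wd) + K(w) @ K(w)))))  # one row of A, eq. (26)
        rhs.append(eta_i @ acc)
    A, b = np.vstack(rows), np.asarray(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)    # ensemble least-squares solve, as in eq. (25)
    g_i, r_i = x[0], x[1:]
    return g_i, r_i
```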
If several sensing elements are positioned at the same location, their positions may be estimated jointly. Equation (24) can be modified by stacking the relationships for the individual elements, with the common position vector appearing as a single shared unknown, and the resulting matrix equation can be solved in the same manner.
Once calibrated, the acceleration at the origin (the center of the head for example) may be found using
a = [SH,lin R]⁻¹sH + K(rH)ω̇ − P(rH)γ(ω),   (29)
where rH is the position of the head mounted sensor relative to the origin. This computation uses the inverse of the matrix product SH,linR, so separation of the two matrices, while possible, is not required.
Thus, a reference sensor mounted on a reference structure, such as a helmet or mouthguard, may be used to determine the orientation and position of the head mounted sensor, together with its sensitivity. This is important for practical applications, such as monitoring head impacts of sports players or military personnel, where accurate positioning of a head mounted sensor is impractical and calibration of the head mounted sensors may be expensive.
The helmet 704 may support one or more visual indicators, such as light emitting diodes (LEDs) 706 of different colors. These indicators may be used to show the system state. States could include, for example, ‘power on’, ‘head sensors found’, ‘calibrating’, ‘calibration complete’ and ‘impact detected’. In one embodiment, an impact above a threshold is indicated by a flashing red light, with the level of the impact indicated by the speed of flashing.
In one embodiment, the head motion is only calculated or output when motion is above a threshold level and calibration is only performed when the motion is below a threshold.
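A minimal sketch of the threshold gating described above; the threshold values and the update and reporting functions are hypothetical placeholders.

```python
CALIBRATION_THRESHOLD = 2.0    # hypothetical angular-rate threshold (rad/s)
OUTPUT_THRESHOLD = 10.0        # hypothetical acceleration threshold (multiples of g)

def update_calibration(sample):
    """Placeholder: refine the calibration (e.g. the matrix product S_H,rot R) during quiet periods."""

def report_head_motion(sample):
    """Placeholder: compute and output the head motion, e.g. drive the indicator LEDs."""

def process_sample(rotation_level, acceleration_level, sample):
    """Calibrate only below one threshold; calculate and output head motion only above another."""
    if rotation_level < CALIBRATION_THRESHOLD:
        update_calibration(sample)
    if acceleration_level > OUTPUT_THRESHOLD:
        report_head_motion(sample)
```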
In the foregoing specification, specific embodiments of the present invention have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of the invention. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
This application claims priority from Provisional Application Ser. No. 61/519,354, filed May 20, 2011, titled “Method and Apparatus for Monitoring Rigid Body Motion in a Selected Frame of Reference”, which is hereby incorporated herein.