Aerospace Vehicle Navigation and Control System Comprising Terrestrial Illumination Matching Module for Determining Aerospace Vehicle Position and Attitude

Information

  • Patent Application
  • Publication Number: 20210206519
  • Date Filed: December 16, 2020
  • Date Published: July 08, 2021
Abstract
The present invention relates to an aerospace vehicle navigation and control system comprising a terrestrial illumination matching module for determining spacecraft position and attitude. The method permits aerospace vehicle position and attitude determination from terrestrial lights imaged by an Earth-pointing camera, without the need for a dedicated sensor to track stars, the sun, or the horizon. Thus, a module for making such determinations can easily and inexpensively be implemented onboard an aerospace vehicle if an Earth-pointing sensor, such as a camera, is present.
Description
RIGHTS OF THE GOVERNMENT

The invention described herein may be manufactured and used by or for the Government of the United States for all governmental purposes without the payment of any royalty.


FIELD OF THE INVENTION

The present invention relates to an aerospace vehicle navigation and control system comprising a terrestrial illumination matching module for determining aerospace vehicle position and attitude.


BACKGROUND OF THE INVENTION

Aerospace vehicles need to periodically determine their position with respect to the Earth so they can stay on their flight path or orbit. In order to determine such position, an aerospace vehicle must be able to determine its pose estimation between two points in time and/or attitude with respect to the Earth. Currently, aerospace vehicles such as satellites primarily use star trackers to determine their orbital position. The manner in which other aerospace vehicles determine their position with respect to the Earth is found in Table 1. None of the current methods of determining an aerospace vehicle's position with respect to the Earth can determine pose estimation between two points in time and/or attitude through a single sensor, and such sensors must be dedicated sensors. As dedicated sensors are required, the aerospace vehicle requires additional mission-specific sensors that add undesired bulk and weight. Furthermore, the farther from the Earth the aerospace vehicle is positioned, the more difficult the task of determining that vehicle's position with respect to the Earth becomes.


Applicants recognized that the source of the problems associated with current methods lies in the use of complex inputs, such as starlight, that require sensors positioned such that they cannot focus on the Earth. In view of such recognition, Applicants discovered that Earth light, such as the light emitted by cities, can substitute for starlight. Thus, mission-specific sensors that are focused on the Earth can be dual-use sensors that acquire both mission-specific information and information for determining an aerospace vehicle's position. As a result, the bulk and weight of an aerospace vehicle can be significantly reduced. In addition to the aforementioned benefits, Applicants' aerospace vehicle navigation and control system can serve as a backup navigation and control system for any conventional system, thus obviating the need for other backup navigation systems that would inherently introduce unwanted bulk and weight to the subject aerospace vehicle.


SUMMARY OF THE INVENTION

The present invention relates to an aerospace vehicle navigation and control system comprising a terrestrial illumination matching module for determining spacecraft position and attitude. The method permits aerospace vehicle position and attitude determination from terrestrial lights imaged by an Earth-pointing camera, without the need for a dedicated sensor to track stars, the sun, or the horizon. Thus, a module for making such determinations can easily and inexpensively be implemented onboard an aerospace vehicle if an Earth-pointing sensor, such as a camera, is present.


Additional objects, advantages, and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present invention and, together with a general description of the invention given above, and the detailed description of the embodiments given below, serve to explain the principles of the present invention.



FIG. 1 presents the first mode of operation for the module, which gives the aerospace vehicle's pose following the processing of terrestrial light images.



FIG. 2A presents the second mode of operation for the module, which gives the aerospace vehicle's attitude following the processing of terrestrial light images.



FIG. 2B is the hardware system schematic and block diagram comprising the terrestrial illumination matching (TIM) module, a camera and/or other sensor which generates terrestrial light images, an onboard central processing unit (CPU), and vehicle actuators.





It should be understood that the appended drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of the invention. The specific design features of the sequence of operations as disclosed herein, including, for example, specific dimensions, orientations, locations, and shapes of various illustrated components, will be determined in part by the particular intended application and use environment. Certain features of the illustrated embodiments have been enlarged or distorted relative to others to facilitate visualization and clear understanding. In particular, thin features may be thickened, for example, for clarity or illustration.


DETAILED DESCRIPTION OF THE INVENTION
Definitions

Unless specifically stated otherwise, as used herein, the terms “a”, “an” and “the” mean “at least one”.


As used herein, the terms “include”, “includes” and “including” are meant to be non-limiting.


It should be understood that every maximum numerical limitation given throughout this specification includes every lower numerical limitation, as if such lower numerical limitations were expressly written herein. Every minimum numerical limitation given throughout this specification will include every higher numerical limitation, as if such higher numerical limitations were expressly written herein. Every numerical range given throughout this specification will include every narrower numerical range that falls within such broader numerical range, as if such narrower numerical ranges were all expressly written herein.


Evening Civil Twilight is the period that begins at sunset and ends in the evening when the center of the sun's disk is six degrees below the horizon. Morning Civil Twilight begins before sunrise, when the center of the sun's disk is six degrees below the horizon, and ends at sunrise. The most recent version of the Air Almanac, as published by the U.S. Government, should be used as the source of the sun's position on the day in question.
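Purely as an illustration of this six-degree criterion (the Air Almanac remains the authoritative source of the sun's position), the following sketch uses the astropy library, an assumed dependency, to test whether a given time falls between Evening and Morning Civil Twilight at a ground location:

```python
# Minimal sketch, assuming astropy is available; the Air Almanac, not this
# computation, is the patent's specified source of the sun's position.
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time
import astropy.units as u

def in_civil_darkness(time_utc: str, lat_deg: float, lon_deg: float) -> bool:
    """True if the sun's center is at least 6 degrees below the horizon,
    i.e. the time lies between Evening and Morning Civil Twilight."""
    site = EarthLocation(lat=lat_deg * u.deg, lon=lon_deg * u.deg)
    t = Time(time_utc)
    sun_altaz = get_sun(t).transform_to(AltAz(obstime=t, location=site))
    return sun_altaz.alt < -6 * u.deg

# Example: an evening over the U.S. Midwest (illustrative values)
print(in_civil_darkness("2020-01-15 02:00:00", lat_deg=41.9, lon_deg=-87.6))
```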


Nomenclature, Subscripts and Background

a=semi-major radius


b=semi-minor radius


C=attitude matrix


e=eccentricity


ê=unit vector of s


E=Essential Matrix

e2=square of first eccentricity


f=flattening of the planet


ƒ=process nonlinear vector function

ℱ=matrix of linearized dynamics


F=Fundamental matrix


fc=focal length of camera


g=gravitational acceleration


H=Jacobian of measurement sensitivity


h=observation function


hE=height of vehicle above Earth's surface


I=identity matrix


JD=Julian Day

K=Kalman gain


k=equatorial gravity constant


N=radius of curvature of vertical prime


n=integer number


P=covariance of state


Q=process noise covariance matrix


q0=equatorial gravity

ℛ=measurement noise covariance matrix


R=rotation matrix between coordinate frames


r=vehicle position


s=measurement from camera to point on earth


T=pose matrix


t=translation


t=time


u=input vector


v=vehicle velocity


w=measurement noise


W=weights


x=state vector


X=image coordinate


y=measurement


Y=expected measurement


β=reduced latitude


Γ=residuals of observations


Δ=change or nutation


ε=obliquity of the ecliptic


θ=Greenwich Apparent Sidereal Time

κ=filter tuning value


μ=gravitational parameter


σ=accuracy of sensor


ν=residual


φ=latitude, deg


χ=set of sigma points


ψ=longitude


ω=angular rate


Super/Subscripts

−=state a priori, but after propagation


+=state a posteriori


0=initial state


c=integer number


CAM=camera frame


E=conditions for the Earth


ECEF=measured with respect to a rotating frame


ECI=measured with respect to an inertial frame


gc=geocentric


gd=geodetic


I=inertial


i=integer index


k=timestep


m=integer number


m=mean


n=integer number


n=normalized


xx=predicted mean covariance


xy=predicted cross covariance


yy=predicted observed covariance


Kalman Filters

A Kalman Filter (KF), an Extended Kalman Filter (EKF), and/or an Unscented Kalman Filter (UKF) is used to propagate a state vector and update it with a measurement. Typically, the accuracy of said aerospace vehicle's propagated position and/or attitude is best if an Unscented Kalman Filter is used, with the next best filter being an Extended Kalman Filter and/or a Kalman Filter. Suitable methods of applying such filters are presented below.


Kalman Filter: Given the system dynamics along with $x_k$, $P_k$, $y_k$, $Q_k$, and $R_k$, the Kalman gain is calculated as follows:

$$K_k = P_k^- H^T \left( H P_k^- H^T + R \right)^{-1}$$


The current state estimate is






$$\hat{x}_k^+ = \hat{x}_k^- + K_k \left( y_k - H \hat{x}_k^- \right)$$


and the current covariance






$$P_k^+ = P_k^- - K_k H P_k^-$$



The system is then propagated from k to k+1 with






$$\hat{x}_{k+1}^- = f(\hat{x}_k^+, t)$$







$$\mathcal{F}(t) = \left. \frac{\partial f(x,t)}{\partial x} \right|_{x = \hat{x}}$$









and






$$P_{k+1}^- = \mathcal{F} P_k^+ \mathcal{F}^T + Q_k$$
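As a concrete illustration of the update and propagation cycle above, a minimal linear Kalman filter sketch in Python/NumPy follows; the constant-velocity dynamics, measurement model, and noise values are placeholder assumptions, not the patent's flight models.

```python
import numpy as np

def kf_update(x_minus, P_minus, y, H, R):
    """Measurement update: Kalman gain, state estimate, and covariance."""
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    x_plus = x_minus + K @ (y - H @ x_minus)
    P_plus = P_minus - K @ H @ P_minus
    return x_plus, P_plus

def kf_propagate(x_plus, P_plus, F, Q):
    """Time update from step k to k+1 for a linear system."""
    return F @ x_plus, F @ P_plus @ F.T + Q

# Illustrative 1-D constant-velocity example (placeholder values)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_update(x, P, np.array([1.1]), H, R)
x, P = kf_propagate(x, P, F, Q)
```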


Extended Kalman Filter: The position estimator relies on an EKF, based on its use in the contemporary literature as well as on the specific application using the camera frame. Writing the process in sequential order, the EKF is formulated by first defining the system dynamics as






$$\dot{x} = f(x,t)$$


and the covariance matrix, P, is propagated by






$$\dot{P} = \mathcal{F} P + P \mathcal{F}^T + Q$$





where







$$\mathcal{F}(t) = \left. \frac{\partial f(x,t)}{\partial x} \right|_{x = x^*}$$







and Q is the process noise covariance. The Jacobian is evaluated at the expected values of the state. The measurement, y, is related to the state vector by the relation






$$y = h(x,t)$$


Letting H be the Jacobian of this relationship, also known as the measurement sensitivity matrix,






$$H = \left. \frac{\partial h}{\partial x} \right|_{x = x^*}$$







This is found from the line-of-sight measurements, using the rotation from the camera frame to the inertial frame and the unit line-of-sight vectors in the inertial frame:







$$H_k = R_{CAM}^{ECI} \, \frac{1}{\| s_k \|} \left[ \left\{ \hat{e}_k^{ECI} \hat{e}_k^{ECI\,T} - I_{3 \times 3} \right\} \;\; 0_{3 \times 3} \right]$$







$R_{CAM}^{ECI}$ is found by assuming the attitude at the time the image was taken is known. At this point the measurement noise covariance is needed and is determined as






$$\mathcal{R}_k = \sigma^2 \left( I_{3 \times 3} - \hat{e}_k^{CAM} \hat{e}_k^{CAM\,T} \right)$$


To form the Kalman gain from the measurement, determine the residual to be






$$\nu_k = y_k - h(x_k, t_k)$$


The Kalman gain for the system can be calculated as






$$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + \mathcal{R}_k \right)^{-1}$$


where P is the expected mean squared error and $\mathcal{R}$ is the measurement noise covariance.


The current state estimate can be updated using this gain and the residual






$$\hat{x}_k^+ = \hat{x}_k^- + K_k \left( y_k - H_k \hat{x}_k^- \right)$$


The current covariance must also be updated as






$$P_k^+ = P_k^- - K_k H_k P_k^-$$
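For illustration, a minimal EKF measurement update corresponding to the equations above is sketched below in Python/NumPy; the finite-difference Jacobian stands in for the analytic sensitivity matrix H, and the observation function h is an assumed placeholder.

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Finite-difference approximation of H = dh/dx evaluated at x."""
    y0 = np.atleast_1d(h(x))
    H = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        H[:, j] = (np.atleast_1d(h(x + dx)) - y0) / eps
    return H

def ekf_update(x_minus, P_minus, y, h, R):
    """EKF measurement update using the residual and Kalman gain above."""
    H = numerical_jacobian(h, x_minus)
    nu = y - np.atleast_1d(h(x_minus))                 # residual
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    x_plus = x_minus + K @ nu
    P_plus = P_minus - K @ H @ P_minus
    return x_plus, P_plus
```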



Unscented Kalman Filter: An Unscented Kalman filter is the preferred type of filter to use for nonlinear discrete time systems. The equations for an Unscented Kalman Filter are






$$x_{k+1} = f(x_k, u_k, v_k, k), \qquad y_k = h(x_k, u_k, k) + w_k$$


A set of points (sigma points, χ) is deterministically selected such that their mean and covariance match those of the state. There are 2n+1 sigma points and associated weights; in the classic UKF, the points are chosen to match the first two moments.











$$\chi_0 = \bar{x}, \qquad W_0 = \frac{\kappa}{n + \kappa}$$

$$\chi_i = \bar{x} + \left( \sqrt{(n + \kappa) P_{xx}} \right)_i, \qquad W_i = \frac{1}{2(n + \kappa)}$$

$$\chi_{i+n} = \bar{x} - \left( \sqrt{(n + \kappa) P_{xx}} \right)_i, \qquad W_{i+n} = \frac{1}{2(n + \kappa)}$$










Each sigma point is propagated through





$$\chi_{k|k-1}^{(i)} = f\!\left( \chi_{k-1}^{(i)} \right)$$


which are averaged to find a predicted mean, used to compute a covariance







$$P_{xx,k}^- = \sum_{i=0}^{2n} W_i^c \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right) \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right)^T + Q_{k-1}$$







Now consider residual information, and transform the sigma points to observations






$$\Gamma_{k|k-1}^{(i)} = h\!\left( \chi_{k|k-1}^{(i)} \right)$$


Average the transformed sigma points to determine an expected measurement






$$\hat{Y}_k = \sum_{i=0}^{2n} W_i^m \, \Gamma_{k|k-1}^{(i)}$$








Having an expected measurement, the predicted observation covariance can be determined







$$P_{yy,k} = \sum_{i=0}^{2n} W_i^c \left( \Gamma_{k|k-1}^{(i)} - \hat{Y}_k \right) \left( \Gamma_{k|k-1}^{(i)} - \hat{Y}_k \right)^T + R_k$$






as well as the predicted cross covariance







$$P_{xy,k} = \sum_{i=0}^{2n} W_i^c \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right) \left( \Gamma_{k|k-1}^{(i)} - \hat{Y}_k \right)^T$$







Finally, the standard Kalman filter update can be applied






$$\nu_k = Y - \hat{Y}_k$$

$$K_k = P_{xy} P_{yy}^{-1}$$

$$\hat{x}_k^+ = \hat{x}_k^- + K_k \nu_k$$

$$P_{xx,k}^+ = P_{xx,k}^- - K_k P_{yy} K_k^T$$
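A compact numerical sketch of one UKF predict/update cycle, following the classic sigma-point equations above, is given below; the models f and h, the tuning value κ, and the use of a Cholesky factor as the matrix square root are illustrative assumptions.

```python
import numpy as np

def ukf_step(x, Pxx, y, f, h, Q, R, kappa=0.0):
    """One predict/update cycle of the classic UKF described above."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * Pxx)      # matrix square root
    chi = np.column_stack([x] + [x + S[:, i] for i in range(n)]
                              + [x - S[:, i] for i in range(n)])
    W = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    W[0] = kappa / (n + kappa)

    # Propagate sigma points, then form predicted mean and covariance
    chi_pred = np.column_stack([f(chi[:, i]) for i in range(2 * n + 1)])
    x_minus = chi_pred @ W
    dX = chi_pred - x_minus[:, None]
    Pxx_minus = dX @ np.diag(W) @ dX.T + Q

    # Transform sigma points to observations and form covariances
    gamma = np.column_stack([h(chi_pred[:, i]) for i in range(2 * n + 1)])
    y_hat = gamma @ W
    dY = gamma - y_hat[:, None]
    Pyy = dY @ np.diag(W) @ dY.T + R
    Pxy = dX @ np.diag(W) @ dY.T

    # Standard Kalman update
    K = Pxy @ np.linalg.inv(Pyy)
    x_plus = x_minus + K @ (y - y_hat)
    Pxx_plus = Pxx_minus - K @ Pyy @ K.T
    return x_plus, Pxx_plus
```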









TABLE 1

Aerospace Vehicle Type and Modes of Guidance, Navigation, and Control

| Vehicle | GNC Methods | Maneuver Method |
| --- | --- | --- |
| AIR | | |
| Weather Balloon | radiosonde, theodolite | pressure inside balloon |
| Manned aircraft | altimeter, inertial navigation system (INS), Global Positioning System (GPS) | thrust, flight control surfaces |
| Unmanned aircraft | altimeter, INS, GPS | thrust, flight control surfaces |
| Quadcopter | visual sensor, GPS | propeller(s) |
| Airborne Missile | altimeter, INS, GPS | thrust, flight control surfaces |
| AEROSPACE | | |
| Scientific Balloon | star camera, altimeter | pressure inside balloon |
| Sounding Rocket | ring laser gyro, altimeter, accelerometers | thrust, flight control surfaces |
| Space Shuttle | human-in-the-loop, star camera | thrust, flight control surfaces |
| Launch Vehicle (Rocket) | INS, ring laser gyro, altimeter, accelerometers | thrust, flight control surfaces |
| Ballistic Missile | INS, GPS | thrust, flight control surfaces |
| SPACE | | |
| Satellite | star camera, sun sensor, horizon sensor, GPS | thruster, electric propulsion, magnetorquer, momentum wheel |
| Space Station | human, star camera, sun sensor, horizon sensor, GPS | thruster, electric propulsion, magnetorquer, momentum wheel |
| Interplanetary Vehicle | star camera, sun sensor | thruster, electric propulsion, momentum wheel |





Examples of Flight Control Surfaces: Fins, Ailerons, Elevators.


Thrust includes the two-directional thrust force, as well as any gimbaled thrust vectoring the vehicle is capable of generating.






Method of Determining an Aerospace Vehicle's Position

Applicants disclose a method of determining an aerospace vehicle's position with respect to the Earth, determining the aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth wherein:

    • a) determining an aerospace vehicle's position with respect to the Earth comprises:
      • (i) having an aerospace vehicle acquire, at a time from Evening Civil Twilight to Morning Civil Twilight, an image of the Earth comprising at least one terrestrial light feature;
      • (ii) matching said at least one terrestrial light feature of the image with at least one feature of a terrestrial light database (a matching sketch follows this list);
      • (iii) weighting said matched images;
      • (iv) optionally, calculating the aerospace vehicle's propagated position and checking the result of said propagated position against the weighting;
      • (v) using the time and altitude at which said image was taken to convert said weighted match into inertial coordinates;
      • (vi) optionally updating said aerospace vehicle's propagated position by using the inertial coordinates in a propagated position and/or attitude calculation; and/or
    • b) determining the aerospace vehicle's pose estimation between two points in time comprising:
      • (i) having an aerospace vehicle acquire, at a time from Evening Civil Twilight to Morning Civil Twilight, at least two images of the Earth at different times, each of said images containing at least one common terrestrial light feature;
      • (ii) comparing said two images to find at least one common terrestrial light feature;
      • (iii) calculating the pose as follows:
        • converting the image's camera coordinates to normalized coordinates;
        • calculating an essential matrix from the normalized coordinates and then recovering the pose from the essential matrix; or
        • converting the image's camera coordinates to normalized coordinates;
        • converting the normalized coordinates to pixel coordinates;
        • calculating a fundamental matrix from the pixel coordinates and then recovering the pose;
      • (iv) combining a known absolute position and attitude of the aerospace vehicle with the recovered pose to yield an updated attitude and an estimated position for the aerospace vehicle.
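As referenced in step a)(ii) above, feature matching against a terrestrial lights database might be sketched as follows using OpenCV; the ORB feature type, the 0.75 ratio threshold, and the database-as-image representation are illustrative assumptions (Lowe's ratio test itself is specified in the more detailed method below).

```python
# Minimal sketch, assuming OpenCV (cv2) is available; feature type,
# ratio threshold, and image sources are illustrative assumptions.
import cv2

def match_terrestrial_lights(image, database_image, ratio=0.75):
    """Match light features in a nighttime Earth image against a
    reference terrestrial lights database image using Lowe's ratio test."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(image, None)
    kp2, des2 = orb.detectAndCompute(database_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    candidates = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test: keep a match only when the best candidate is
    # clearly better than the second-best candidate.
    return [m for m, n in candidates if m.distance < ratio * n.distance]
```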


Applicants disclose a method of determining an aerospace vehicle's position with respect to the Earth, determining the aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth wherein:

    • a) determining an aerospace vehicle's position with respect to the Earth comprises:
      • (i) having an aerospace vehicle acquire, at a known general altitude and at a time from Evening Civil Twilight to Morning Civil Twilight, an image of the Earth comprising at least one terrestrial light feature;
      • (ii) matching, using Lowe's ratio test, said at least one terrestrial light feature of the image with at least one feature of a terrestrial light database;
      • (iii) weighting, to a scale of one, said matched images;
      • (iv) optionally, calculating the aerospace vehicle's propagated position, using a Kalman Filter, an Extended Kalman Filter and/or an Unscented Kalman Filter (typically the accuracy of said aerospace vehicle's propagated position and/or attitude is best if an Unscented Kalman Filter is used, with the next best filter being an Extended Kalman Filter), and checking the result of said propagated position against the weighting;
      • (v) using the time and altitude at which said image was taken to convert said weighted match into inertial coordinates by transforming a state vector containing position and velocity from Earth-Centered-Earth-Fixed (ECEF) coordinates to Earth-Centered-Inertial (ECI) coordinates using the following equations (a numerical sketch of this conversion appears after part a):







$$r^{ECI} = R\, r^{ECEF}$$

$$v^{ECI} = R\, v^{ECEF} + \dot{R}\, r^{ECEF}$$

$$R = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \dot{R} = \omega_E \begin{bmatrix} -\sin\theta & -\cos\theta & 0 \\ \cos\theta & -\sin\theta & 0 \\ 0 & 0 & 0 \end{bmatrix}$$











        • where θ represents the Greenwich Apparent Sidereal Time, measured in degrees and computed as follows:












$$\theta = \left[ \theta_m + \Delta\psi \cos(\varepsilon_m + \Delta\varepsilon) \right] \bmod 360°$$

        • where the Greenwich mean sidereal time is calculated as follows:





$$\theta_m = 100.46061837 + 36000.770053608\, t + 0.000387933\, t^2 - (1/38710000)\, t^3$$

        • where t represents Terrestrial Time, expressed in Julian centuries from the J2000.0 epoch via the Julian Day (JD):






$$t = \frac{JD - JD_{2000\ \mathrm{Jan}\ 01,\ 12^h}}{36525} = \frac{JD - 2451545.0}{36525}$$











        • wherein the mean obliquity of the ecliptic is determined from:












$$\varepsilon_m = 23°26′21.″448 - 46.″8150\, t - 0.″00059\, t^2 + 0.″001813\, t^3$$

        • wherein the nutations in obliquity and longitude involve the following three trigonometric arguments:
          • L=280.4665+(36000.7698)t
          • L′=218.3165+(481267.8813)t
          • Ω=125.04452−(1934.136261)t
        • and the nutations, expressed in arcseconds, are calculated using the following equations:





$$\Delta\psi = -17.20 \sin\Omega - 1.32 \sin 2L - 0.23 \sin 2L' + 0.21 \sin 2\Omega$$





$$\Delta\varepsilon = 9.20 \cos\Omega + 0.57 \cos 2L + 0.10 \cos 2L' - 0.09 \cos 2\Omega$$

        • then, using the equations for the position, r, and velocity, v, in the ECI frame, the position and velocity in the ECEF frame are calculated using the dimensions of the Earth; preferably the following dimensions for the Earth are used:
          • a = 6378137 m
          • b = 6356752.3142 m
          • q0 = 9.7803267714 m/s²
          • k = 0.00193185138639
          • e² = 0.00669437999013
        • then longitude is calculated from the ECEF position by:






$$\psi = \arctan\!\left[ \frac{r_y^{ECEF}}{r_x^{ECEF}} \right]$$












        • The geodetic latitude, φgd, is calculated using Bowring's method:














$$\bar{\beta} = \arctan\!\left[ \frac{r_z^{ECEF}}{(1 - f)\, s} \right]$$

$$\phi_{gd} = \arctan\!\left[ \frac{r_z^{ECEF} + \dfrac{e^2 (1 - f)}{1 - e^2}\, a \sin^3\beta}{s - e^2\, a \cos^3\beta} \right]$$












        • finally the geocentric latitude is calculated from the geodetic,














$$\tan\phi_{gc} = \left( \frac{\dfrac{a_e (1 - e^2)}{\sqrt{1 - e^2 \sin^2\phi_{gd}}} + h_{gd}}{\dfrac{a_e}{\sqrt{1 - e^2 \sin^2\phi_{gd}}} + h_{gd}} \right) \tan\phi_{gd}$$













        • where f is the flattening of the planet; $e^2$ is the square of the first eccentricity, or $e^2 = 1 - (1 - f)^2$; and $s = \left( (r_x^{ECEF})^2 + (r_y^{ECEF})^2 \right)^{1/2}$. This calculation is iterated at least two times, preferably at least three times, to provide a converged solution, known as the reduced latitude, which is calculated by:













$$\beta = \arctan\!\left[ \frac{(1 - f) \sin\phi}{\cos\phi} \right]$$











        • wherein the altitude, hE, above Earth's surface is calculated with the following equation:













$$h_E = s \cos\phi + \left( r_z^{ECEF} + e^2 N \sin\phi \right) \sin\phi - N$$

        • wherein the radius of curvature in the vertical prime, N, is found with







$$N = \frac{a}{\left[ 1 - e^2 \sin^2\phi \right]^{1/2}}$$











      • (vi) optionally updating said aerospace vehicle's propagated position by using the inertial coordinates in a propagated position and/or attitude calculation, wherein said calculation uses a Kalman Filter, an Extended Kalman Filter and/or an Unscented Kalman Filter (typically the accuracy of said aerospace vehicle's propagated position and/or attitude is best if an Unscented Kalman Filter is used, with the next best filter being an Extended Kalman Filter and/or a Kalman Filter).
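As referenced in step a)(v), the following sketch illustrates the ECEF-to-ECI conversion numerically; it applies the Greenwich sidereal time polynomial above but omits the nutation corrections for brevity (so GAST is approximated by GMST here), and all names are illustrative.

```python
import numpy as np

OMEGA_E = 7.2921150e-5  # Earth rotation rate, rad/s (IERS value)

def gmst_deg(jd):
    """Greenwich Mean Sidereal Time from the polynomial above (degrees)."""
    t = (jd - 2451545.0) / 36525.0  # Julian centuries since J2000.0
    theta = (100.46061837 + 36000.770053608 * t
             + 0.000387933 * t**2 - t**3 / 38710000.0)
    return theta % 360.0

def ecef_to_eci(r_ecef, v_ecef, jd):
    """Rotate an ECEF position/velocity state into ECI coordinates.
    Nutation corrections (GAST) are omitted in this sketch."""
    th = np.radians(gmst_deg(jd))
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    Rdot = OMEGA_E * np.array([[-s, -c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]])
    return R @ r_ecef, R @ v_ecef + Rdot @ r_ecef
```

A companion sketch, continuing from the same imports, recovers longitude, geodetic latitude (Bowring's method, iterated as described above), and altitude from an ECEF position; the WGS-84 flattening value is an assumption consistent with the preferred Earth dimensions listed above.

```python
A_E = 6378137.0            # semi-major radius a, m
F_E = 1.0 / 298.257223563  # flattening f (WGS-84 value, an assumption)
E2 = 0.00669437999013      # square of first eccentricity

def ecef_to_geodetic(r_ecef, iterations=3):
    """Longitude, geodetic latitude (Bowring's method), and altitude."""
    x, y, z = r_ecef
    psi = np.arctan2(y, x)                          # longitude
    s = np.hypot(x, y)
    beta = np.arctan2(z, (1.0 - F_E) * s)           # reduced latitude
    for _ in range(iterations):                     # iterate to convergence
        phi = np.arctan2(z + E2 * (1 - F_E) / (1 - E2) * A_E * np.sin(beta)**3,
                         s - E2 * A_E * np.cos(beta)**3)
        beta = np.arctan2((1.0 - F_E) * np.sin(phi), np.cos(phi))
    N = A_E / np.sqrt(1.0 - E2 * np.sin(phi)**2)    # vertical prime radius
    h = s * np.cos(phi) + (z + E2 * N * np.sin(phi)) * np.sin(phi) - N
    return psi, phi, h
```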



    • b) determining the aerospace vehicle's pose estimation between two points in time comprising:
      • (i) having an aerospace vehicle acquire, at a time from Evening Civil Twilight to Morning Civil Twilight, at least two images of the Earth at different times, each of said images containing at least one common terrestrial light feature;
      • (ii) comparing said two images to find at least one common terrestrial light feature;
      • (iii) calculating the pose by first converting the image's camera coordinates to normalized coordinates using the following equations and method, wherein the camera's reference frame is defined with a first axis aligned with the central longitudinal axis of the camera; a second axis that is a translation of said first axis and a normalization of said camera's reference frame; and a third axis that is a rotation and translation of said second axis to the top left corner of said image, with the x-axis aligned with the local horizontal direction and the y-axis pointing down the side of the image from this top left corner, and wherein said rotation is aided by the camera's calibration matrix, containing the focal lengths of the optical sensor, which map to pixel lengths (a pose-recovery sketch follows this method description):










$$X_c = \begin{bmatrix} x_{CAM} \\ y_{CAM} \\ z_{CAM} \end{bmatrix}$$

$$X_n = \begin{bmatrix} x_n \\ y_n \end{bmatrix} = \begin{bmatrix} x_{CAM}/z_{CAM} \\ y_{CAM}/z_{CAM} \end{bmatrix}$$

$$X_p = \begin{bmatrix} x_p \\ y_p \end{bmatrix} = \begin{bmatrix} f_c & 0 \\ 0 & f_c \end{bmatrix} \begin{bmatrix} x_n \\ y_n \end{bmatrix} + \begin{bmatrix} n_x/2 \\ n_y/2 \end{bmatrix}$$











      • (iv) calculating the essential matrix from the normalized coordinates and then recovering the pose from the essential matrix using the following equations, wherein the equation for the epipolar constraint is defined as follows:












$$x_{n_1}^T \left( t \times R\, x_{n_0} \right) = 0$$

        • and said equation for the epipolar constraint is rewritten as the following linear equation:







$$x_{n_1}^T [t]_\times R\, x_{n_0} = 0$$

        • where








$$[t]_\times = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}$$











        • Here $[t]_\times$ denotes the skew-symmetric matrix formed from the translation vector t, so that the cross product $t \times v$ may be written as the matrix product $[t]_\times v$.

        • the matrix $[t]_\times$ is combined with the rotation to define the Essential Matrix, E:














$$x_{n_1}^T E\, x_{n_0} = 0$$





where

$$E = [t]_\times R$$

        • and the Essential Matrix is scaled or unscaled; if scaled, the scale is known from the two images, and the pose it encodes reflects six degrees of freedom.
        • wherein other constraints on the Essential Matrix are the following:





$$\det(E) = 0$$





$$2 E E^T E - \operatorname{tr}\!\left( E E^T \right) E = 0$$

        • or, when the epipolar constraint is applied to pixel coordinates, then the Fundamental Matrix, F, is used:







$$x_{p_1}^T F\, x_{p_0} = 0$$

        • said equation is then solved for the Fundamental Matrix and the pose is recovered from the Essential and/or Fundamental Matrices wherein said pose is defined as:






$$T = [R \mid t]$$

      • (iv) combining the known absolute position and attitude of the aerospace vehicle with the recovered pose to yield an updated attitude and estimated position for the aerospace vehicle, wherein said combining step is achieved by using the following equations and wherein the attitude, C1, at the second image is defined by






$$C_1 = C_0 + R$$








        • wherein C0 is preferably defined as zero if it is not previously known; and the inertial position corresponding to the second image is found by adding the scaled change in position, t, to the previous inertial position:













$$r_1 = r_0 + t.$$
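As referenced in step (iii), pose recovery from matched points in two images can be sketched with OpenCV's essential matrix routines; the RANSAC option, calibration parameters, and array layout are illustrative assumptions, and the recovered translation is known only up to scale unless the scale is supplied as discussed above.

```python
import cv2
import numpy as np

def recover_pose_from_matches(pts0, pts1, fc, cx, cy):
    """Estimate the essential matrix from matched image points taken at
    two times, then recover the pose T = [R | t] (t is up to scale)."""
    K = np.array([[fc, 0.0, cx],
                  [0.0, fc, cy],
                  [0.0, 0.0, 1.0]])               # camera calibration matrix
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t                                   # rotation and translation
```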


Module and Aerospace Vehicle Comprising Same

Applicants disclose a module comprising a central processing unit programmed to determine an aerospace vehicle's position with respect to the Earth, an aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth according to the method of Paragraph 0020.


Applicants disclose a module comprising a central processing unit programmed to determine an aerospace vehicle's position with respect to the Earth, an aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth according to the method of Paragraph 0021.


Applicants disclose the module of Paragraphs 0022 through 0023, said module comprising an input/output controller, a random access memory unit, and a hard drive memory unit, said input/output controller being configured to receive a first digital signal, preferably said first digital signal comprising data from a sensor, more preferably digitized imagery, and to transmit a second digital signal, comprising the updated aerospace vehicle's position and/or attitude, to said central processing unit.
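For illustration only, the data flow described in this paragraph (first digital signal in, second digital signal out) might be organized as in the following sketch; the class, method names, and interfaces are hypothetical.

```python
# Hypothetical sketch of the terrestrial illumination matching (TIM)
# module's data flow; names and interfaces are illustrative only.
class TerrestrialIlluminationMatchingModule:
    def __init__(self, lights_database, estimator):
        self.db = lights_database      # known terrestrial lights "truth" data
        self.estimator = estimator     # e.g. a UKF wrapping the steps above

    def process(self, image, time_utc, altitude):
        """First digital signal in (digitized imagery); second digital
        signal out (updated position and/or attitude)."""
        matches = self.db.match(image)             # feature matching step
        weighted = self.db.weight(matches)         # weighting to a scale of one
        r_eci = self.db.to_inertial(weighted, time_utc, altitude)
        return self.estimator.update(r_eci)        # propagated state update
```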


Applicants disclose an aerospace vehicle comprising:

    • a) a module according to any of Paragraphs 0022 through 0024;
    • b) a sensor pointed towards the earth, preferably said sensor comprising a camera;
    • c) an internal and/or external power source for powering said aerospace vehicle;
    • d) an onboard central processing unit; and
    • e) a means to maneuver said aerospace vehicle, preferably said means to maneuver said aerospace vehicle being selected from the group consisting of a flight control surface, propeller, thruster, electric propulsion, magnetorquer, and momentum wheel, more preferably selected from the group consisting of thruster, electric propulsion, magnetorquer, and momentum wheel.


When Applicants' method is employed, the position of the aerospace vehicle is supplied to the vehicle's guidance system and/or to one or more individuals who are guiding the aerospace vehicle, so that the aerospace vehicle may be guided in the manner desired to achieve the mission of said aerospace vehicle.


EXAMPLES

The following examples illustrate particular properties and advantages of some of the embodiments of the present invention. Furthermore, these are examples of reduction to practice of the present invention and confirmation that the principles described in the present invention are valid, but they should not be construed as in any way limiting the scope of the invention.


The aerospace vehicle pose and attitude determination method is implemented for the Suomi NPP spacecraft in a low-Earth, sun-synchronous, near-polar orbit at an altitude of approximately 825 km. The spacecraft features an Earth-pointing camera with a day/night light collection band of 0.7 microns in the visible spectrum and a ground field of view of 1500 km. At 150-second time steps, directly over the Great Lakes and U.S. Midwest region, the Earth-pointing camera takes nighttime terrestrial images. Using the pose determination method, the images are compared with a known terrestrial lights database, which is used as the "truth" dataset. Following comparison, the module computes an inertial orbital position vector and an inertial orbital velocity vector for the spacecraft. Using the attitude determination method, the images are compared with the same "truth" terrestrial lights database. Following comparison, the module computes the spacecraft's change of attitude in terms of roll, pitch, and yaw.


The aerospace vehicle pose determination method is implemented for a commercial aviation flight across the United States from Cincinnati, Ohio, to Pensacola, Fla. This route contains many major cities in its field of view during flight, such as Louisville, Ky.; Nashville, Tenn.; and Atlanta, Ga. The aircraft flies at 10,050 meters at 290 meters per second during nighttime, with a camera in the visible spectrum. Taking images every 15 minutes (900 seconds), an accurate position can be found by comparing the images with a known terrestrial lights database, verifying that the plane is still on the predetermined flight plan in the case of GPS failure.


The aerospace vehicle pose and attitude determination method is implemented for a scientific balloon flight carrying a payload and a ground-pointing camera in the visible spectrum, launching out of Fort Sumner, N. Mex., during a calm day. The balloon rises to a height of 31,500 meters over a period of two hours. The balloon stays at altitude, fluctuating slightly during civil twilight, and descends over a period of minutes once the balloon is separated from the payload, which falls back to Earth with a parachute deployment. The pose determination method is able to function during ascent and account for the changing altitude, since the resolution of the camera stays constant, and can still determine a very accurate position measurement. At the desired operating altitude, tracking position is essential so the balloon payload is not released over a populated area, which could cause harm to the population. The balloon is also able to track attitude, which is essential for the pointing of an instrument such as a telescope or sensor. For both pose and attitude determination, images are taken of terrestrial lights and compared with a known terrestrial lights database.


The aerospace vehicle pose and attitude determination method is implemented for a re-entry vehicle with a ground-pointing camera in the visible spectrum returning from Low Earth Orbit at 200,000 meters to the ground, usually a water landing at zero meters. As with the balloon, this method is useful for determining the rapidly changing altitude the vehicle experiences, taking images right before and directly after the re-entry event, upon entering the sensible atmosphere. Terrestrial light matching with a known terrestrial lights database would act as a back-up or an alternative to the star tracker, GPS, and INS that many space and air vehicles use for position and attitude, without having to switch modes. The terrestrial light matching module would not function during the re-entry event itself due to the high temperatures and light experienced by the vehicle entering the atmosphere at high speed.


While the present invention has been illustrated by a description of one or more embodiments thereof and while these embodiments have been described in considerable detail, they are not intended to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departing from the scope of the general inventive concept.

Claims
  • 1. A method of determining an aerospace vehicle's position with respect to the Earth, comprising determining the aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth wherein: a) determining the aerospace vehicle's position with respect to the Earth comprises: (i) having the aerospace vehicle acquire, at a time from Evening Civil Twilight to Morning Civil Twilight, an image of the Earth comprising at least one terrestrial light feature; (ii) matching said at least one terrestrial light feature of the image with at least one feature of a terrestrial light database; (iii) weighting said matched images; (iv) optionally, calculating the aerospace vehicle's propagated position and checking the result of said propagated position against the weighting; (v) using the time and altitude that said image was taken to convert said weighted match into inertial coordinates; (vi) optionally updating said aerospace vehicle's propagated position by using the inertial coordinates in a propagation position and/or attitude calculation; and/or b) determining the aerospace vehicle's pose estimation between two points in time comprising: (i) having the aerospace vehicle acquire, at a time from Evening Civil Twilight to Morning Civil Twilight, at least two images of the Earth at different times, each of said images containing at least one common terrestrial light feature; (ii) comparing said two images to find at least one common terrestrial light feature; (iii) calculating the pose as follows: converting the image's camera coordinates to normalized coordinates; calculating the essential matrix from the normalized coordinates and then recovering the pose from the essential matrix; or converting the image's camera coordinates to normalized coordinates; converting the normalized coordinates to pixel coordinates; calculating the fundamental matrix from the pixel coordinates and then recovering the pose; (iv) combining the known absolute position and attitude of the aerospace vehicle with the recovered pose to yield an updated attitude and estimated position for the aerospace vehicle.
  • 2. A method of determining an aerospace vehicle's position with respect to the Earth, determining the aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth wherein: a) determining the aerospace vehicle's position with respect to the Earth comprises: (i) having an aerospace vehicle acquire, at a known general altitude and at a time from Evening Civil Twilight to Morning Civil Twilight, an image of the Earth comprising at least one terrestrial light feature; (ii) matching, using Lowe's ratio test, said at least one terrestrial light feature of the image with at least one feature of a terrestrial light database; (iii) weighting, to a scale of one, said matched images; (iv) optionally, calculating the aerospace vehicle's propagated position, using at least one of a Kalman Filter, an Extended Kalman Filter and an Unscented Kalman Filter, and checking the result of said propagated position against the weighting; (v) using the time and altitude that said image was taken at to convert said weighted match into inertial coordinates by transforming a state vector containing position and velocity from Earth-Centered-Earth-Fixed (ECEF) coordinates to Earth-Centered-Inertial (ECI) coordinates using the following equations:
  • 3. A module comprising a central processing unit programmed to determine an aerospace vehicle's position with respect to the Earth, an aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth according to the method of claim 1.
  • 4. A module comprising a central processing unit programmed to determine an aerospace vehicle's position with respect to the Earth, an aerospace vehicle's pose estimation between two points in time and/or attitude with respect to the Earth according to the method of claim 2.
  • 5. The module of claim 3, said module comprising an input/output controller, a random access memory unit, and a hard drive memory unit, said input/output controller being configured to receive a first digital signal, and transmit a second digital signal comprising the updated aerospace vehicle's position and/or attitude, to said central processing unit.
  • 6. The module of claim 5 wherein said first digital signal comprises data from a sensor.
  • 7. The module of claim 6 wherein said first digital signal comprises digitized imagery.
  • 8. The module of claim 4, said module comprising an input/output controller, a random access memory unit, and a hard drive memory unit, said input/output controller being configured to receive a first digital signal, and transmit a second digital signal comprising the updated aerospace vehicle's position and/or attitude, to said central processing unit.
  • 9. The module of claim 8 wherein said first digital signal comprises data from a sensor.
  • 10. The module of claim 9 wherein said first digital signal comprises digitized imagery.
  • 11. An aerospace vehicle comprising: a) a module according to claim 5; b) a sensor pointed towards the earth; c) an internal and/or external power source for powering said aerospace vehicle; d) an onboard central processing unit; and e) a means to maneuver said aerospace vehicle.
  • 12. An aerospace vehicle according to claim 11 wherein said sensor comprises a camera and said means to maneuver said aerospace vehicle comprise at least one of a flight control surface, propeller, thruster, electric propulsion system, magnetorquer, and momentum wheel.
  • 13. An aerospace vehicle according to claim 12 wherein said means to maneuver said aerospace vehicle comprise at least one of a thruster, an electric propulsion system, magnetorquer, and momentum wheel.
  • 14. An aerospace vehicle comprising: a) a module according to claim 6; b) a sensor pointed towards the earth; c) an internal and/or external power source for powering said aerospace vehicle; d) an onboard central processing unit; and e) a means to maneuver said aerospace vehicle.
  • 15. An aerospace vehicle according to claim 14 wherein said sensor comprises a camera and said means to maneuver said aerospace vehicle comprise at least one of a flight control surface, propeller, thruster, electric propulsion system, magnetorquer, and momentum wheel.
  • 16. An aerospace vehicle according to claim 15 wherein said means to maneuver said aerospace vehicle comprise at least one of a thruster, an electric propulsion system, magnetorquer, and momentum wheel.
Provisional Applications (1)
  • Number: 62957250; Date: January 2020; Country: US