This application claims priority to German Patent Application No. 102 36 570.9, which was filed Aug. 8, 2002.
The present invention relates to a method for the fuel-optimized selection of a configuration of thrusters on a spacecraft.
Such a method of fuel-optimized selection is known, for example, from U.S. Pat. No. 6,347,262 B1 for the case of a spin-stabilized spacecraft. A configuration of thrusters on a spacecraft, as considered by the present invention, serves in particular for the attitude and position correction of the spacecraft. Such an attitude and position correction via the thrusters is known, for example, from EP 0 750 239 A2.
From EP 0 977 687, a special method for the low-fuel control of an arrangement of thrusters on a spacecraft is known, wherein, for the purpose of finding a low-fuel solution for the control, a convex linear optimization problem is resolved through
In this method, a dual simplex algorithm is applied, which is intended to find an optimal solution to the problem through a largely unfocused search, wherein, however, it is possible with this method that no solution exists for the currently required force and moment vector and the present thruster arrangement.
From N. Karmarkar: A new polynomial-time algorithm for linear programming, Combinatorica 4 (4), 1984, pp. 373–395, a basic method for solving linear optimization problems of a general form is known.
It is an object of the present invention to offer a method for the fuel-optimized selection of a configuration of thrusters on a spacecraft which permits a focused search for a solution to the linear optimization problem that is permissible in any case. This object is achieved through the features described herein.
The present invention relates to a method for the fuel-optimized selection of a configuration of thrusters on a spacecraft, wherein, for the purpose of finding a fuel-optimized solution for the selection process, a linear optimization problem, particularly a convex linear optimization problem, is resolved through
Pursuant to the invention it is provided that
By applying a scaled iteration gradient, instead of a mere search method as in the state of the art, a focused locating process for the optimal solution can take place. When forming the scaled iteration gradient, at least one boundary value condition for a permissible solution may be included. Application of a scaled iteration gradient also largely excludes the possibility that only a suboptimal solution to the linear optimization problem is found. The fact that such linear problems can, in particular, be so-called convex problems is basically known; see, for example, the chapter “Linear Programming” at the following internet link of the European Business School of the Schloss Reichartshausen University:
http://www.ebs.de/Lehrstuehle/Wirtschaftsinformatik/NEW/Courses/Semester2/Math2/.
By taking the boundary value conditions into account within the framework of the limiting factor, the next iteration solution may be determined such that it lies within the permissible range of values, because the limiting factor allows the iteration step width to be adjusted so that no boundary value condition is violated. If the boundary value conditions are instead taken into account when forming the scaled iteration gradient, the gradient direction may be selected such that the next iteration solution likewise lies within the permissible range of values.
Furthermore, for the present linear optimization problems it is known that the optimal permissible solution, which corresponds to a coordinate point in a multidimensional space of all permissible solutions that is limited by boundary conditions, is located on the boundary of said limited space. Accordingly, the scaled iteration gradient and the limiting factor are preferably adjusted such that an iterative approximation to an optimal point on the boundary of the multidimensional space of permissible solutions occurs.
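By way of illustration only, the role of such a limiting factor can be sketched as follows for a box constraint 0 ≤ a ≤ f; the function name, the safety margin and the use of numpy are assumptions made for this sketch, not part of the method as claimed:

```python
import numpy as np

def limit_step(a, d, f, safety=0.999):
    """Largest step width k such that a + k*d stays inside the box 0 <= a <= f.

    a: current, strictly permissible iterate; d: proposed step direction;
    f: vector of upper bounds.  'safety' keeps the next iterate slightly
    inside the permissible range.  Illustrative sketch only.
    """
    k = np.inf
    for ai, di, fi in zip(a, d, f):
        if di < 0.0:                    # component moves towards the lower bound 0
            k = min(k, -ai / di)
        elif di > 0.0:                  # component moves towards the upper bound f
            k = min(k, (fi - ai) / di)
    return safety * k                   # k stays inf only for the zero direction
```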
It may now be provided in particular that an upper bound for a permissible solution is defined as a boundary value condition.
Moreover, it may advantageously be provided that the iteration gradient is determined with the help of Gaussian elimination, which is a very fast method.
It may also be provided in particular that in every iteration step a scaling of the iteration gradient takes place such that a gradient component becomes smaller the closer the corresponding component of the result of the previous iteration step comes to a boundary value condition. In this way, a new scaling operation of the iteration gradient is performed in every iteration step, wherein certain components of the gradient vanish when the corresponding components of the previous iteration solution come very close to a boundary value condition, for example closer than a first predefined distance. Said first distance can also be selected to be infinitesimally small.
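A minimal sketch of such a rescaling, assuming a diagonal scaling built from the distance of each component to its nearer bound (the concrete form of the scaling matrix used in the method is not reproduced here and is therefore an assumption):

```python
import numpy as np

def scale_gradient(g, a, f, eps=1e-12):
    """Damp gradient components near the bounds 0 and f (illustrative sketch).

    Each gradient component is multiplied by the distance of the previous
    iteration result a to its nearer bound, so the scaled component becomes
    smaller the closer a gets to a boundary value condition; components
    closer than the first distance eps are set to zero outright.
    """
    dist = np.minimum(a, f - a)              # distance to the nearer bound
    dist = np.where(dist < eps, 0.0, dist)   # freeze components "on" a bound
    return dist * g
```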
Furthermore, it may be provided that the iteration phase is terminated as soon as the result of an iteration step violates at least one boundary value condition, and that the result of the previous iteration step is determined as an optimal solution of the effectiveness criterion. Thus the iteration is terminated if the algorithm leaves the range of permissible solutions, and the last permissible solution is taken as the optimal solution. In this way it is guaranteed in a simple manner that the solution determined as the final result of the method is in any case as optimal as possible and at the same time permissible.
The iteration phase, however, may also be terminated as soon as the iteration method converges towards a permissible solution and the result of a certain iteration step differs from the result of the previous iteration step by less than a second predefined distance, wherein the result of the last iteration step is determined as an optimal solution of the effectiveness criterion.
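Both stopping rules can be combined as in the following sketch, where `step` stands for one iteration of the method, `feasible` checks the boundary value conditions, and `tol` plays the role of the second predefined distance (all names are assumptions of this sketch):

```python
import numpy as np

def iterate_until_done(a0, step, feasible, tol=1e-9, max_iter=200):
    """Run iteration steps until one of the two termination rules fires.

    Rule 1: the new iterate violates a boundary value condition -> return the
            previous (last permissible) iterate.
    Rule 2: the iterate changes by less than tol -> return the new iterate.
    Illustrative sketch only.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = step(a)
        if not feasible(a_next):                  # rule 1
            return a
        if np.linalg.norm(a_next - a) < tol:      # rule 2
            return a_next
        a = a_next
    return a
```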
A preferred embodiment of the present invention is presented below.
A method for the fuel-optimized selection of an arrangement of thrusters on a spacecraft is considered, which is used for attitude and position control of the spacecraft. In order to generate the forces and moments that are applied to a spacecraft, for example in order to be able to govern translation and rotation simultaneously during a docking phase or any other attitude and position control, n ≥ 7 thrusters are required. The appropriate control signals must then meet the requirements
Furthermore, with more than 7 thrusters an effectiveness criterion, which corresponds in general to fuel consumption, may be optimized.
The mathematical formulation thus leads to the following linear optimization problem (LOP):
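The equation block itself is not reproduced above; based on the surrounding description (control condition, non-negative thruster commands, fuel-like cost, upper bound f treated later in the expanded system), the LOP presumably has the following familiar form, where w is a weighting vector representing fuel consumption; this reconstruction is an assumption:

```latex
% Hedged reconstruction of the LOP (1); not a verbatim copy of the original block.
\begin{align}
  T_c\, a &= r,                          && \text{(1a) required forces and moments}\\
  a &\ge 0,                              && \text{(1b) thruster commands are non-negative}\\
  J(a) &= w^{\mathsf{T}} a \rightarrow \min && \text{(1c) effectiveness (fuel) criterion}
\end{align}
```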
To apply all LOP solution methods, a permissible solution must first be found in an initialization phase, i.e., a vector a_z that fulfills (1a) and (1b). With the so-called singular value decomposition (abbreviated SVD) of T_c
(2) reveals the following:
Since the thruster set must be able to realize both positive and negative r, it follows from (2) that c_1 and c_2 must exist such that
(a) a_1 := U_1 s_1 + U_2 c_1 ≥ 0
(b) a_2 := U_1 (−s_1) + U_2 c_2 ≥ 0    (3)
(c) → U_2 (c_1 + c_2) =: U_2 c_p > 0
wherein ε was introduced for numerical reasons with a view to the application of the following optimization steps.
For large right-hand sides r it is possible that (1a, b) has no solution with a ≤ f; therefore, the problem expanded by x_s is considered
This now also allows the upper bound to be adhered to and allows the required permissible starting value for a to be calculated for
Here, ε represents a first, for example infinitesimally small, distance.
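One possible reading of this initialization, written out as a sketch: the particular solution U_1 s of T_c a = r is shifted along a strictly positive null-space direction U_2 c_p (cf. (3c)) until every component exceeds ε. The splitting into U_1, U_2, the parameter τ and the helper names are assumptions of this sketch:

```python
import numpy as np

def feasible_start(Tc, r, U2, cp, eps=1e-9):
    """Permissible starting value a0 with Tc a0 = r and a0 >= eps (sketch).

    U2 is assumed to span the null space of Tc (e.g. the right singular
    vectors belonging to zero singular values from np.linalg.svd(Tc)); cp is
    assumed to satisfy U2 @ cp > 0 component-wise, as required by (3c).
    """
    M = Tc @ Tc.T                            # M = Tc Tc^T
    a_part = Tc.T @ np.linalg.solve(M, r)    # particular solution U1 s = Tc^T M^{-1} r
    d = U2 @ cp                              # strictly positive null-space direction
    tau = max(0.0, np.max((eps - a_part) / d))
    return a_part + tau * d                  # still solves Tc a0 = r, now a0 >= eps
```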
All subsequent considerations relate to the expanded system (6), wherein however the original description pursuant to (1) is maintained for reasons of simplicity.
To resolve the LOP, a second procedural step now follows, namely an optimization of the effectiveness criterion (1c) and/or (6d), which is performed iteratively as follows:
Here, vg_i represents the iteration gradient, which is scaled in every iteration step, i.e., the gradient direction is determined anew in each iteration step. Moreover, k represents a limiting factor for the iteration step width, which is determined as follows:
This selection of k, taking the boundary value condition 0 ≤ a ≤ f into account, ensures that a_{i+1} remains permissible.
The essential idea in (8) is the repeated rescaling of the problem with D_i and the subsequent step in the negative gradient direction thus modified and projected onto U_2^(i), wherein, owing to the familiar structure of the problem as a convex linear optimization problem with an optimal solution on the boundary, it is guaranteed that the effectiveness criterion is reduced in every iteration step. The iteration is preferably terminated when the magnitude of vg_i·k drops below a specified threshold serving as the second distance, i.e., a_i hardly changes any more.
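Under the assumption that vg_i is the negative cost gradient, rescaled with D_i and projected onto the null space of T_c D_i, one iteration of (8) can be sketched as follows; the exact definitions of (8a, b) are not reproduced above, so the details of this sketch are assumptions:

```python
import numpy as np

def iteration_step(Tc, a, f, w, eps=1e-12, safety=0.999):
    """One scaled, projected gradient step (illustrative sketch of (8)).

    w is the cost vector of the effectiveness criterion.  The step direction
    lies in the null space of Tc, so Tc a = r stays satisfied, and the
    limiting factor k keeps 0 <= a <= f satisfied as well.
    """
    d = np.minimum(a, f - a)                 # scaling D_i: distance to nearer bound
    d = np.where(d < eps, 0.0, d)
    D = np.diag(d)
    TD = Tc @ D
    # scaled negative gradient, projected onto the null space of Tc D_i
    # (the small symmetric system is solved by Gaussian elimination, cf. below):
    x = np.linalg.solve(TD @ TD.T, TD @ (D @ w))
    vg = -D @ (D @ w - TD.T @ x)
    # limiting factor k: largest step keeping 0 <= a + k*vg <= f
    with np.errstate(divide="ignore", invalid="ignore"):
        k_low = np.where(vg < 0, -a / vg, np.inf)
        k_up = np.where(vg > 0, (f - a) / vg, np.inf)
    k = safety * min(np.min(k_low), np.min(k_up))
    return a + k * vg
```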
A particular expansion of the present method as compared with the method of Karmarkar consists in taking an additional boundary value condition into account in every iteration step, here the inclusion of the upper bound f (upper bound problem) by adding the second factor in D_i and by taking the upper bound f into consideration via the term k_0 in the calculation of k pursuant to (8b). Until now, Karmarkar-type methods have usually resorted to complex expansions of the linear optimization problem with slack variables, with the disadvantage that the dimension of the problem to be solved increases considerably. Here the present method represents an essential simplification. Additionally, the suggested initialization phase, which is better adapted to the present problem, eliminates the very complex determination of a permissible solution in the initialization phase as practiced by Karmarkar.
Another advantageous procedural step of the method described here thus lies in the repeated calculation of vg_i, which preferably is performed not through the SVD of T_c D_i, but rather (using the simplified notation T_c for T_c D_i and D for D_i) on the basis of
namely via the solution of (9d) for x by means of Gaussian elimination. This method is clearly faster than an SVD. For a_0 as well, U_1 is not determined; rather, U_1 s is calculated directly pursuant to
U_1 s = U_1 Σ^{-1} V^T r = T_c^T M^{-1} r    (e)
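For the particular solution, this amounts to a single small symmetric solve instead of a full SVD. A minimal sketch, assuming M = T_c T_c^T as implied by (e) and using np.linalg.solve in place of the Gaussian elimination mentioned above:

```python
import numpy as np

def particular_solution(Tc, r):
    """Compute U1 s = Tc^T M^{-1} r without an SVD (illustrative sketch).

    M = Tc @ Tc.T is the small (6x6) Gram matrix of the thruster geometry;
    solving M x = r by Gaussian elimination and multiplying by Tc^T is much
    faster than determining U1 explicitly via an SVD.
    """
    M = Tc @ Tc.T
    return Tc.T @ np.linalg.solve(M, r)
```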
Thus, the method consists in the execution of the following calculation steps:
Finally additional advantages of the suggested method compared with a simplex method pursuant to the state of the art are summarized in the following:
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
102 36 570 | Aug 2002 | DE | national
U.S. Patent Documents Cited

Number | Name | Date | Kind
---|---|---|---
5,130,931 | Paluszek et al. | Jul 1992 | A
5,195,172 | Elad et al. | Mar 1993 | A
5,310,143 | Yocum et al. | May 1994 | A
5,428,712 | Elad et al. | Jun 1995 | A
6,208,915 | Schutte et al. | Mar 2001 | B1
6,347,262 | Smay et al. | Feb 2002 | B1
6,823,675 | Brunell et al. | Nov 2004 | B1
Foreign Patent Documents Cited

Number | Date | Country
---|---|---
0 750 239 | Dec 1996 | EP
0 977 687 | May 2001 | EP
WO 98/49058 | Nov 1998 | WO
Prior Publication Data

Number | Date | Country
---|---|---
20050080521 A1 | Apr 2005 | US