BACKGROUND OF THE INVENTION
Field of the Invention:
The present invention relates to inertial navigation and
global positioning. More specifically, the present invention relates to systems and
methods for improving the GPS counter-jamming performance of inertial navigation systems.
Description of the Related Art:
Inertial navigation systems typically use gyroscopes and
accelerometers to provide precision vehicular navigation. Unfortunately, inertial
navigation accuracy degrades because of instrument calibration errors and other
errors. These navigation errors typically grow as a function of time. Independent
observations of the vehicle navigation information are needed to bound these navigation
errors. Therefore, sensors other than the INS are needed in order to obtain independent
navigation information. Hence, a conventional approach for correcting these errors
involves the integration of a Global Positioning System (GPS) receiver with the inertial
navigation system. However, GPS is vulnerable to jamming, which can impede the
ability of the GPS receiver to correct the inertial navigation errors.
Typically, to counter the effects of GPS jamming, designers
have endeavored to: 1) improve the accuracy of the inertial navigation system and
2) make the GPS receiver resistant to jamming. However, these approaches are expensive
and limited in efficacy.
Hence, a need remains in the art for an effective yet inexpensive
system or method for improving the navigation accuracy of integrated inertial navigation
and Global Positioning Systems.
SUMMARY OF THE INVENTION
The need in the art is addressed by the position estimation
system of the present invention. In a most general implementation, the inventive
system includes a first arrangement for providing an image including a known target
in a known reference frame. A second arrangement correlates the image with a stored
image. The correlation is used to compute an error with respect to a position estimate.
In a specific embodiment, the error is referenced with
respect to first (x), second (y) and third (z) directions. A target location error
is computed with respect to a stored image provided by a target image catalog. The
target image catalog includes target geo-locations and digital terrain elevation
data. In an illustrative application, the image data is provided by synthetic aperture
radar or forward-looking infrared systems. An observation model and a measurement
noise matrix are processed by a Kalman filter to ascertain a position error in navigation
data generated by an integrated inertial navigation and Global Positioning System.
In the illustrative application, geo-registered SAR/FLIR
imagery is used to track targets and to determine a target location error (TLE).
This TLE information is derived from a set of error equations that describe the relationship
between vehicle navigation information and target data. In accordance with the invention,
this relationship is used to form an observation model for vehicle navigation with
respect to target locations. Using Kalman filtering and the observation model, vehicle
navigation errors can be bounded and the navigation accuracy of the vehicle can be improved.
BRIEF DESCRIPTION OF THE DRAWINGS
- Fig. 1 is a block diagram of an illustrative implementation of a position estimation
system in accordance with the present teachings.
- Fig. 2 is a simplified flow diagram showing an illustrative implementation of
a position estimation method in accordance with the present teachings.
- Fig. 3 is a diagram showing a geo-sighting error model in three dimensions in
accordance with an illustrative application of the present teachings.
DESCRIPTION OF THE INVENTION
Illustrative embodiments and exemplary applications will
now be described with reference to the accompanying drawings to disclose the advantageous
teachings of the present invention.
While the present invention is described herein with reference
to illustrative embodiments for particular applications, it should be understood
that the invention is not limited thereto. Those having ordinary skill in the art
and access to the teachings provided herein will recognize additional modifications,
applications, and embodiments within the scope thereof and additional fields in
which the present invention would be of significant utility.
In general, in accordance with the present teachings, geo-registered
SAR/FLIR imagery is used to track targets and to determine a target location error
(TLE). The TLE is generated by a set of error equations that describe the relationship
between the vehicle (sensor) navigation information and the target location. The
geo-registered images obtained from SAR/FLIR systems provide position estimates
for a known pixel on the ground or other reference frame through target recognition
methods. This position estimate serves as an independent observation to bound errors
in an inertial navigation system.
The United States patent
entitled ON-BOARD NAVIGATION SYSTEM FOR AN AERIAL CRAFT INCLUDING A SYNTHETIC
APERTURE SIDEWAYS LOOKING RADAR, issued January 16, 1996 to B. Falconnet (hereinafter
the "Falconnet" patent), the teachings of which are hereby incorporated herein by
reference, appears to teach the use of SAR (Synthetic Aperture Radar) sensors to
obtain target imagery in the x and y horizontal plane. (See also U.S. Patent No.
5,432,520, entitled SAR/GPS INERTIAL METHOD OF RANGE MEASUREMENT, issued July 11, 1995
to Schneider et al., the teachings of which are hereby incorporated herein by
reference.) This imagery is then correlated with maps of the geo-locations that are
pre-stored in the database to obtain two error equations, in the x and y directions.
These two error equations serve as an observation model for the Kalman filter to
bound the vehicle navigation errors.
In accordance with the present invention, Falconnet's teachings
are extended by: 1) including a third dimensional axis, the altitude of a target
image location and 2) providing a specific teaching as to how the third dimension
can be used to improve the navigational accuracy of an integrated INS/GPS navigation
system. The geo-registered imagery is extended to sensors from SAR or FLIR (forward-looking
infrared) systems. A simple first order error model in the computed target geo-location
is used to illustrate the effectiveness of Kalman filter updating using geo-registration
imagery. Detailed x, y, and z observation equations are provided which involve the
vehicle's position, velocity, and attitude, as well as the angle between the horizontal
plane and the slant plane. The position error differences can be minimized through
optimal estimation techniques, such as a Kalman filter, to bound INS navigation errors.
The equations form an observation matrix in a Kalman filter.
The method described in this invention can also be extended
to any sensor on the vehicle that produces target location errors (TLE) on the known
target image because the TLE equations can be reduced to a set of equations related
to the vehicle navigation errors and target image position errors.
Fig. 1 is a block diagram of an illustrative implementation
of a position estimation system in accordance with the present teachings. The inventive
system 10 includes an antenna array 12 which feeds a synthetic aperture radar (SAR)
sensor 14. The sensor 14 is also adapted to process FLIR images. In accordance with
the present teachings, the images provided by the SAR/FLIR sensor 14 are input to
an image processor 16. The image processor 16 uses data from a catalog of geo-registered
features 18 to identify a target in a known reference frame as discussed more fully
below. Those skilled in the art will appreciate that the known reference frame may
be a surface other than the surface of the earth without departing from the scope
of the present invention. The output of the image processor 16 is input to an INS/GPS
integrator 20. The integrator 20 includes an INS processor 22, a GPS processor 26
and a Kalman filter 28. The INS processor 22 receives vehicle motion data from an
on-board inertial measurement unit 24. The GPS processor 26 receives a GPS signal
along with noise and, in some environments, a GPS jamming signal from a GPS receiver
25. The integrator 20 outputs vehicle position, velocity, and attitude errors to
a guidance processor 30. The guidance processor 30 outputs vehicle position, velocity
and attitude information, corrected in accordance with the present teachings, for
use in a conventional manner. The guidance processor 30 also feeds position, velocity,
and attitude errors, along with the range from the sensor to the target, back to the
SAR/FLIR sensor 14 for generating the TLE equations.
Fig. 2 is a simplified flow diagram showing an illustrative
implementation of a position estimation method in accordance with the present teachings.
In the best mode, the method 100 is implemented in software by the image processor
16 of Fig. 1. As illustrated in Fig. 2, the method 100 includes the step 102 of
providing a GPS signal from the GPS receiver 25 via the GPS processor 26 (Fig. 1)
and the step 104 of providing vehicle position, velocity, attitude, and error data from
the IMU 24 via the INS processor 22 (Fig. 1). This data is analyzed by jamming logic
at step 106 to ascertain whether a jamming signal 'J' is present and whether the
jamming signal J exceeds a threshold 'T'.
If J < T, then the data from the INS and GPS processors
22 and 26 is input to the Kalman filter 28 (Fig. 1) directly at step 116.
If, however, J ≥ T, then at step 108 the system
10 generates target location error (TLE) equations drawing data from the catalog
of geo-registered features 18 (Fig. 1) at step 110. Next, at step 112, the coordinates
generated at step 108 are transformed. At step 114, an observation matrix (H) and
a measurement noise matrix (R) are generated. At step 116, this data is passed to
the Kalman filter to bound the navigation error as discussed above. At step 118,
the output of the Kalman filtering step 116 is used in a guidance process in a conventional
manner.
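The jamming-threshold branch of steps 106 through 116 can be sketched as follows. The function name, observation vectors, and threshold value are hypothetical illustrations, not part of the original disclosure.

```python
def select_kalman_inputs(J, T, ins_gps_obs, tle_obs):
    """Sketch of the Fig. 2 branch: if the jamming signal J is below
    the threshold T, the INS/GPS data feeds the Kalman filter directly
    (step 116); otherwise the geo-registration (TLE) observation built
    in steps 108-114 is substituted."""
    return ins_gps_obs if J < T else tle_obs

# Hypothetical three-element observations of position error (meters).
gps_obs = (0.1, -0.2, 0.05)
tle_obs = (0.4, 0.1, -0.3)

chosen = select_kalman_inputs(0.5, 1.0, gps_obs, tle_obs)  # low-jamming case
```

Under heavy jamming (J ≥ T) the same call returns the TLE-based observation, so the Kalman filter continues to receive an independent measurement even when GPS is denied.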
Returning to Fig. 1, the catalog 18 stores the image
and location of each target. The catalog details geo-locations and DTED (Digital
Terrain Elevation Data). Based on the vehicle trajectory supplied by the INS/GPS
process, the image processor 16 determines which geo-target in the catalog 18 needs
to be sighted. The processor 16 then sends this information to the SAR/FLIR sensor
14 to locate the target and then to acquire the geo-registered imagery. The SAR/FLIR sensor
14 determines the location of the geo-target and correlates this imagery with the
information in the catalog to determine the accuracy of the observation.
Errors in computed target geo-location can be primarily
attributed to three major sources: sensor position errors, sensor bearing errors
and DTED errors. In accordance with the present teachings, these errors are treated
as being statistically independent and zero-mean Gaussian. For the sake of simplicity,
all other errors are assumed to be relatively small and are ignored. It is also
assumed that the image of the target can be acquired. The DTED noise is treated
herein as part of the measurement noise in the z-direction of an ECEF (Earth-Centered,
Earth-Fixed) coordinate frame.
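Because the three error sources are modeled as statistically independent and zero-mean Gaussian, their variances simply add, with the DTED contribution entering only the z-direction. The numeric values below are hypothetical, chosen purely to illustrate the bookkeeping.

```python
# Variances of the three independent, zero-mean Gaussian error sources
# (hypothetical values, in meters squared).
var_sensor_position = 4.0   # sensor position error
var_sensor_bearing = 9.0    # sensor bearing error
var_dted = 25.0             # DTED error, folded into the z-direction

# For independent sources the variances add. DTED noise affects only z.
var_xy = var_sensor_position + var_sensor_bearing
var_z = var_sensor_position + var_sensor_bearing + var_dted
```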
Errors in the sensor are first transformed into the
slant coordinate frame and then into the platform coordinate frame. The
slant coordinate frame is defined as follows: xs is along the vehicle
velocity vector, zs is perpendicular to the slant plane (the plane containing
the vehicle velocity vector and the line of sight to the target), and ys
completes a right-hand coordinate system.
The errors due to the sensor position errors in the slant coordinate frame are
expressed in terms of the following quantities: Δr and ΔV
are the vectors of the vehicle navigation position and velocity errors, respectively,
in the body coordinate frame; V is the velocity vector of the vehicle; R
is the range vector from the vehicle position to the known geo-location; and φ
is the angle between the vehicle flight path and the line of sight between the
vehicle and the target image.
The errors due to sensor bearing errors, in the slant coordinate
frame, are derived in the same manner. Combining the two sets of errors yields
the total errors dxs, dys, and dzs in the slant coordinate frame. These errors,
dxs, dys, and dzs, are then converted
into the platform coordinate frame through the angle ψ (see Fig. 3).
Fig. 3 is a diagram showing a geo-sighting error model
in three dimensions in accordance with an illustrative application of the present
teachings. The platform coordinate frame (vehicle body coordinate frame) is defined
as follows: xp is the vehicle velocity direction, yp points along the vehicle right
wing, and zp completes a right-hand coordinate system. The slant coordinate frame
is related to the platform coordinate frame by a rotation through the angle ψ;
substituting the slant-frame error expressions into this rotation yields the error
equations in the platform coordinate frame.
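Since xs and xp both lie along the vehicle velocity direction, the slant-to-platform conversion can be sketched, under an assumed sign convention, as a rotation about the shared x-axis through the angle ψ:

```latex
\begin{pmatrix} dx_p \\ dy_p \\ dz_p \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\psi & -\sin\psi \\
0 & \sin\psi & \cos\psi
\end{pmatrix}
\begin{pmatrix} dx_s \\ dy_s \\ dz_s \end{pmatrix}
```

Here dxp = dxs because the x-axes coincide; the signs of the sin ψ terms depend on the handedness convention assumed for Fig. 3.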
The classical Kalman filter is described by its standard discrete-time
equations, where Φk is the transition matrix.
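In standard textbook notation, with Φk the transition matrix, Hk the observation matrix, and Rk the measurement noise covariance, the classical discrete Kalman filter can be written as:

```latex
\begin{aligned}
x_{k+1} &= \Phi_k x_k + w_k, \qquad z_k = H_k x_k + v_k, \\
K_k &= P_k^{-} H_k^{T} \bigl( H_k P_k^{-} H_k^{T} + R_k \bigr)^{-1}, \\
\hat{x}_k &= \hat{x}_k^{-} + K_k \bigl( z_k - H_k \hat{x}_k^{-} \bigr), \\
P_k &= \bigl( I - K_k H_k \bigr) P_k^{-},
\end{aligned}
```

where wk and vk are the process and measurement noise, Kk is the Kalman gain, and Pk is the error covariance.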
In accordance with the present teachings, an observation
matrix Hk and the observation (measurement) noise Rk are generated
as follows. Assume that Δr and ΔV
are the first six Kalman filter error states, defined as Δr = (δrx, δry, δrz)T
and ΔV = (δVx, δVy, δVz)T
in the platform coordinate frame, where superscript T denotes the transpose of the
vector. Note that if Δr and ΔV
are defined in the ECEF coordinate frame, then these error states need to be transformed
into the platform frame in which the error equations are defined. The Kalman filter
state, the observation matrix Hk, and the measurement noise matrix Rk are
constructed from these quantities.
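A minimal sketch of assembling Hk and Rk for the three-dimensional geo-registration update is given below; the hij values used are arbitrary placeholders, not the invention's actual error expressions.

```python
import numpy as np

def build_H_R(h, r00, r11, r22):
    """Assemble the 3 x 6 observation matrix H_k, whose rows map the
    six error states (position and velocity) to the x, y, and z
    measurements, and the diagonal measurement noise matrix R_k.
    r22 includes the DTED noise in the z-direction."""
    H = np.asarray(h, dtype=float).reshape(3, 6)
    R = np.diag([float(r00), float(r11), float(r22)])
    return H, R

# Placeholder h_ij values, for illustration only.
h = np.arange(18)
H, R = build_H_R(h, r00=1.0, r11=1.0, r22=6.0)  # r22 carries DTED noise
```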
Assume that the vector R = (Rx, Ry, Rz), and let V = |V| and R = |R| denote the
magnitudes of the velocity and range vectors, respectively. The error equations
can then be expressed in terms of these quantities, and the elements hij of the
observation matrix follow directly. The terms r00, r11, and r22 represent the
observation noise in the SAR/FLIR imager accuracy during the geo-target scan;
the DTED noise is included in the r22 term.
In the illustrative embodiment, the sensor position and
attitude errors are converted into the slant coordinate frame and then into the
platform frame. Errors that may be due to the SAR signal processing and ranging
errors are ignored, since these errors are assumed to be relatively small. If some
of those errors are large enough to be considered, the same method can be used to
transform these errors into the platform frame and obtain similar results.
The error states in the Kalman filter should include the
vehicle navigation errors, such as the position and velocity error vectors Δr and ΔV.
The error equations above can be converted to an observation matrix in the Kalman
filter, where the elements hij are defined above.
In general, the coordinate frames of the sensor errors and
of the navigation error states in the Kalman filter are different; they must be
transformed into the same coordinate frame.
The error equations in Claim 1 are, in fact, a TLE in the
platform coordinate frame. Based on this invention, any TLE can be extracted and
converted into an observation matrix as described in equation (5).
If the SAR/FLIR sensor can only obtain two-dimensional
images (dxp, dyp), the observation matrix will be a two-dimensional
matrix, where the elements hij are defined as before.
Thus, the present invention has been described herein with
reference to a particular embodiment for a particular application. Those having
ordinary skill in the art and access to the present teachings will recognize additional
modifications, applications, and embodiments within the scope thereof.
It is therefore intended by the appended claims to cover
any and all such applications, modifications and embodiments within the scope of
the present invention.