Vision-based relative state estimation of non-cooperative spacecraft under modeling uncertainty

Shai Segal, Avishy Carmi, Pini Gurfil

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

56 Scopus citations


Estimating the relative pose and motion of cooperative satellites using on-board sensors is a challenging problem. When the satellites are non-cooperative, the problem becomes far more complicated, as there may be poor or no a priori information about the motion or structure of the target satellite. In this work we develop robust algorithms for solving this problem, assuming that only visual sensory information is available. Using two cameras mounted on a chaser satellite, the relative state of a target satellite, including its position, attitude, and rotational and translational velocities, is estimated. Our approach employs a stereoscopic vision system to track a set of feature points on the target spacecraft. The perspective projection of these points onto the two cameras constitutes the observation model of an extended Kalman filter (EKF)-based filtering scheme. In the final part of this work, the relative motion filtering algorithm is made robust to uncertainties in the inertia tensor. This is accomplished by endowing the plain EKF with a maximum a posteriori identification scheme that determines the most probable inertia tensor from several available hypotheses.
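The abstract's two main ingredients can be sketched in code: the stereo perspective-projection observation model that feeds the EKF, and a maximum a posteriori (MAP) update over a finite set of inertia-tensor hypotheses. This is a minimal illustration, not the paper's implementation; the camera model (focal length `f`, a right camera offset by `baseline` along the x-axis) and all function names are assumptions for the sketch.

```python
import numpy as np

def project(points_cam, f):
    """Perspective projection: (X, Y, Z) -> f * (X/Z, Y/Z) for each row."""
    return f * points_cam[:, :2] / points_cam[:, 2:3]

def stereo_measurement(p_rel, R_rel, features_body, baseline, f):
    """Predicted image coordinates of target feature points in both cameras.

    p_rel: target position in the left-camera frame.
    R_rel: target attitude as a rotation matrix (body -> camera frame).
    features_body: (N, 3) feature-point coordinates in the target body frame.
    baseline: assumed offset of the right camera along the camera x-axis.
    """
    pts_left = (R_rel @ features_body.T).T + p_rel
    pts_right = pts_left - np.array([baseline, 0.0, 0.0])
    return np.vstack([project(pts_left, f), project(pts_right, f)])

def map_inertia_update(log_post, innovations, innov_covs):
    """One MAP identification step over inertia-tensor hypotheses.

    Each hypothesis runs its own EKF; here we score hypothesis i by the
    Gaussian log-likelihood of its innovation nu_i with covariance S_i,
    then renormalize the log-posterior over hypotheses.
    """
    log_post = np.array(log_post, dtype=float)
    for i, (nu, S) in enumerate(zip(innovations, innov_covs)):
        _, logdet = np.linalg.slogdet(2.0 * np.pi * S)
        log_post[i] += -0.5 * (nu @ np.linalg.solve(S, nu) + logdet)
    log_post -= log_post.max()              # shift for numerical stability
    post = np.exp(log_post)
    return np.log(post / post.sum())
```

For example, a feature point 5 m in front of the left camera with an identity relative attitude projects to the image center in the left camera and is shifted by `-f * baseline / Z` in the right camera; a hypothesis whose filter produces consistently smaller innovations accumulates the larger posterior and is selected as the most probable inertia tensor.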

Original language: English
Title of host publication: 2011 Aerospace Conference, AERO 2011
State: Published - 13 May 2011
Externally published: Yes
Event: 2011 IEEE Aerospace Conference, AERO 2011 - Big Sky, MT, United States
Duration: 5 Mar 2011 - 12 Mar 2011

Publication series

Name: IEEE Aerospace Conference Proceedings
ISSN (Print): 1095-323X


Conference: 2011 IEEE Aerospace Conference, AERO 2011
Country/Territory: United States
City: Big Sky, MT

ASJC Scopus subject areas

  • Aerospace Engineering
  • Space and Planetary Science

