Hyperspectral Video Target Tracking Based on Deep Features with Spectral Matching Reduction and Adaptive Scale 3D Hog Features

Zhe Zhang, Xuguang Zhu, Dong Zhao, Pattathal V. Arun, Huixin Zhou, Kun Qian, Jianling Hu

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Hyperspectral video target tracking is generally challenging when the scale of the target varies. In this paper, a novel algorithm is proposed to address the challenges prevalent in existing hyperspectral video target tracking approaches. The proposed approach employs deep features along with spectral matching reduction and adaptive-scale 3D HOG features to track objects even when their scale varies. Spectral matching reduction is adopted to estimate the spectral curve of the selected target region using a weighted combination of the global and local spectral curves. In addition to the deep features, adaptive-scale 3D HOG features are extracted using cube-level features at three different scales. The four weak response maps thus obtained (one from the deep features and three from the HOG scales) are then combined using adaptive weights to yield a strong response map. Finally, a region proposal module is utilized to estimate the target box. These strategies make the approach robust against scale variations of the target. A comparative study on different hyperspectral video sequences illustrates the superior performance of the proposed algorithm as compared to state-of-the-art approaches.
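The spectral-reduction and fusion steps described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the abstract does not specify either weighting rule, so the sketch uses a fixed mixing weight `alpha` for the spectral curves and the peak-to-sidelobe ratio (a common heuristic in correlation-filter tracking) as the adaptive weight for fusing the four weak response maps.

```python
import numpy as np

def spectral_matching_reduction(cube, mask, alpha=0.6):
    """Hypothetical sketch: estimate the target's spectral curve as a weighted
    combination of the local (target-region) and global (whole-frame) mean
    spectra. `alpha` is an assumed fixed weight; the paper's rule may differ."""
    local_curve = cube[mask].mean(axis=0)                      # (bands,)
    global_curve = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    return alpha * local_curve + (1.0 - alpha) * global_curve

def adaptive_fusion(weak_maps, eps=1e-8):
    """Combine weak response maps into one strong map. The adaptive weight of
    each map is its peak-to-sidelobe ratio (an assumption, not the paper's)."""
    weights = []
    for m in weak_maps:
        sidelobe = np.delete(m.ravel(), m.argmax())            # all but the peak
        psr = (m.max() - sidelobe.mean()) / (sidelobe.std() + eps)
        weights.append(max(psr, 0.0))
    weights = np.asarray(weights)
    weights /= weights.sum() + eps
    return sum(w * m for w, m in zip(weights, weak_maps))

# Usage with synthetic data: a 64x64x16 hyperspectral frame and four weak
# response maps (one from deep features, three from the HOG scales).
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 16))
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True                                      # selected target box
curve = spectral_matching_reduction(cube, mask)
strong = adaptive_fusion([rng.random((64, 64)) for _ in range(4)])
row, col = np.unravel_index(strong.argmax(), strong.shape)     # peak = target center
```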

Original language: English
Article number: 5958
Journal: Remote Sensing
Volume: 14
Issue number: 23
DOIs
State: Published - 1 Dec 2022
Externally published: Yes

Keywords

  • adaptive scale 3D HOG
  • adaptive weight
  • hyperspectral video target tracking
  • region proposal module
  • spectral matching reduction

ASJC Scopus subject areas

  • General Earth and Planetary Sciences
