Visual tracking based on transfer learning of deep salience information

Journal Title: Opto-Electronic Advances - Year 2020, Vol 3, Issue 9

Abstract

In this paper, we propose a new visual tracking method based on salience information and deep learning. Salience detection is used to extract the salient features of an image, while the successive layers of a convolutional neural network (CNN) yield increasingly complex representations of image features. The attention-based salience mechanism of biological vision resembles the feature hierarchy of a CNN, which motivates us to improve the representation ability of the CNN with salience detection. We adopt fully convolutional networks (FCNs) to perform salience detection and reuse part of the network structure for salience extraction, which improves the classification ability of the model. The proposed network performs well in tracking by exploiting salient information. Compared with other state-of-the-art algorithms, our algorithm tracks targets better on open tracking datasets: it achieves an accuracy of 0.5592 on the Visual Object Tracking 2015 (VOT15) dataset, and a precision of 0.710 and a success rate of 0.429 on the Unmanned Aerial Vehicle 123 (UAV123) dataset.
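
To illustrate the general idea of combining an FCN-style salience map with CNN features, the following is a minimal PyTorch sketch. It is not the authors' architecture: the backbone, the layer sizes, and the names (SalienceGuidedTracker, salience_head) are illustrative assumptions; the sketch only shows one common way a single-channel salience map can reweight feature maps before classification.

    # Minimal sketch (assumed, not the paper's exact network): an FCN-style
    # salience head whose output map reweights CNN features before a
    # target/background classifier. Requires PyTorch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SalienceGuidedTracker(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            # Toy convolutional feature extractor (stand-in for a pretrained backbone).
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # FCN-style salience head: 1x1 convolution to a single-channel map.
            self.salience_head = nn.Conv2d(64, 1, kernel_size=1)
            # Classification head producing target / background scores.
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):
            feat = self.features(x)                              # B x 64 x H/2 x W/2
            salience = torch.sigmoid(self.salience_head(feat))   # B x 1 x H/2 x W/2
            weighted = feat * salience                           # emphasize salient regions
            pooled = F.adaptive_avg_pool2d(weighted, 1).flatten(1)
            return self.classifier(pooled), salience


    if __name__ == "__main__":
        model = SalienceGuidedTracker()
        scores, sal_map = model(torch.randn(1, 3, 128, 128))
        print(scores.shape, sal_map.shape)  # torch.Size([1, 2]) torch.Size([1, 1, 64, 64])

In practice the salience branch would be transferred from a pretrained FCN salience detector, as the abstract describes, rather than trained from scratch as in this toy example.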

Authors and Affiliations

Haorui Zuo*, Zhiyong Xu*, Jianlin Zhang, Ge Jia


DOI: 10.29026/oea.2020.190018

How To Cite

Haorui Zuo*, Zhiyong Xu*, Jianlin Zhang, Ge Jia (2020). Visual tracking based on transfer learning of deep salience information. Opto-Electronic Advances, 3(9), -. https://europub.co.uk/articles/-A-693127