Fusion of Saliency Maps for Visual Attention Selection in Dynamic Scenes

Abstract

The human visual system processes visual information selectively, resolving the contradiction between its limited processing resources and the huge volume of visual input. Building attention models similar to the human visual attention system would be highly beneficial to computer vision and machine intelligence; meanwhile, it remains a challenging task due to the complexity of the human brain and our limited understanding of the mechanisms underlying human attention. Previous studies emphasized static attention; however, motion features, which intuitively play key roles in the human attention system, have not been well integrated into earlier models. Motion features such as motion direction are assumed to be processed within the dorsal visual and dorsal auditory pathways, and so far there is no systematic approach to extracting motion cues well. In this paper, we propose a generic Global Attention Model (GAM) based on visual attention analysis. The computational saliency map is superimposed from a set of saliency maps produced by different predefined approaches. We sum three saliency maps to reflect dominant motion features in the attention model, i.e., the fused saliency map at each frame is adjusted by the top-down, static, and motion saliency maps. In this way, the proposed attention model accommodates motion features so that it responds to real visual events in a manner similar to the human visual attention system under realistic circumstances. The visual challenges used in our experiments are selected from benchmark video sequences. We tested the GAM on several dynamic scenes with high speed and cluttered backgrounds, such as a traffic artery, a parachuter landing, and surfing. The experimental results show that the GAM achieves high robustness and real-time performance under complex dynamic scenes.
Extensive evaluations based on comparisons with other attention models have verified the effectiveness of the proposed system.
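The abstract describes fusing per-frame top-down, static, and motion saliency maps by summation into a single attention map. A minimal sketch of that fusion step is shown below; the per-map normalization, the equal default weights, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def normalize(saliency):
    """Rescale a saliency map to [0, 1]; a flat map maps to all zeros."""
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else np.zeros_like(saliency)

def fuse_saliency(top_down, static, motion, weights=(1.0, 1.0, 1.0)):
    """Fuse three per-frame saliency maps by weighted summation.

    Equal weights are an assumption; the paper adjusts the fused map
    with the three component maps but the exact weighting is not given here.
    """
    maps = [normalize(np.asarray(m, dtype=float)) for m in (top_down, static, motion)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)

# The attended location for a frame would then be the fused map's peak:
rng = np.random.default_rng(0)
td, st, mo = (rng.random((60, 80)) for _ in range(3))
fused = fuse_saliency(td, st, mo)
attended = np.unravel_index(fused.argmax(), fused.shape)  # (row, col) of peak saliency
```

Increasing the motion weight relative to the other two would bias selection toward moving objects, which is the role the abstract assigns to the motion saliency map in dynamic scenes.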

Authors and Affiliations

Jiawei Xu, Shigang Yue

How To Cite

Jiawei Xu, Shigang Yue (2013). Fusion of Saliency Maps for Visual Attention Selection in Dynamic Scenes. International Journal of Advanced Research in Artificial Intelligence (IJARAI), 2(4), 48-58. https://europub.co.uk/articles/-A-156599