This paper proposes a saliency-aware fusion algorithm for integrating infrared (IR) and visible light (ViS) images (or videos), with the aim of enhancing the visualization of the latter. Our algorithm involves saliency detection followed by a biased fusion. The goal of the saliency detection is to generate a saliency map for the IR image that highlights the co-occurrence of high brightness values (“hot spots”) and motion; Markov Random Fields (MRFs) are used to combine these two sources of information. The subsequent fusion step biases the end result in favor of the ViS image, except where a region shows clear IR saliency, in which case the IR image gains local dominance. As a result, the fused image depicts the salient foreground objects (gleaned from the IR image) against an easily recognizable background supplied by the ViS image. An evaluation of the proposed saliency detection method indicates improvements in detection accuracy compared to state-of-the-art alternatives. Moreover, both objective and subjective assessments confirm the effectiveness of the proposed fusion algorithm in terms of visual context enhancement.
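The biased fusion described in the abstract can be sketched as a per-pixel convex combination in which the IR saliency map acts as the weight: where saliency is high, the IR image dominates; elsewhere, the ViS image does. This is a minimal illustrative sketch under that assumption; the function name, parameters, and weighting scheme are not the paper's own notation.

```python
import numpy as np

def fuse(ir, vis, saliency):
    """Per-pixel saliency-biased fusion (illustrative sketch).

    ir, vis   : 2-D float arrays, co-registered and equally sized.
    saliency  : 2-D array of IR saliency values; values are clipped
                to [0, 1] and used as the IR weight at each pixel.
    Returns the fused image: saliency * ir + (1 - saliency) * vis.
    """
    s = np.clip(saliency, 0.0, 1.0)
    return s * ir + (1.0 - s) * vis
```

With a saliency value of 1 a pixel is taken entirely from the IR image, with 0 entirely from the ViS image, and intermediate values blend the two, which matches the described behavior of ViS dominance except in clearly salient IR regions.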
Additional Metadata
Keywords Image fusion, saliency, multi-modal image processing
THEME Software (theme 1)
Publisher Elsevier
Persistent URL dx.doi.org/10.1016/j.neucom.2012.12.015
Journal Neurocomputing
Project Fire Detection and Management through a Multi-Sensor Network for the Protection of Cultural Heritage Areas from the Risk of Fire and Extreme Weather Conditions
Citation
Han, J., Pauwels, E.J., & de Zeeuw, P.M. (2013). Fast Saliency-aware Multi-modality Image Fusion. Neurocomputing, 111, 70–80. doi:10.1016/j.neucom.2012.12.015