KIZ OpenIR
CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map
Wang, X; Guan, Z; Qian, WH; Cao, JD; Liang, S; Yan, J
2024
Published in: INFORM FUSION
ISSN: 1566-2535
Volume: 102
Abstract: In infrared and visible image fusion (IVIF), prior-knowledge constraints built from image-level information often ignore the commonalities and differences between source-image features and cannot fully exploit the complementary information that infrared images provide to visible images. To address this, this study develops a Contrastive learning-based Self-Supervised fusion model (CS2Fusion), which treats infrared images as a complement to visible images and introduces a Compensation Perception Network (CPN) that guides the backbone network to generate fused images by estimating the feature compensation map of the infrared image. The core idea rests on two observations: (1) there is usually a significant disparity in semantic information between different modalities; (2) despite these large semantic differences, the distributions of self-correlation and saliency features tend to be similar among features of the same modality. Building on these observations, we use a self-correlation and saliency operation (SSO) to construct positive and negative pairs, driving the CPN to perceive the features of infrared images that complement visible images under the constraint of a contrastive loss. The CPN also incorporates a self-supervised learning mechanism: visually impaired areas are simulated by randomly cropping patches from visible images, yielding varied views of the same scene that form multiple positive samples and enhance the model's fine-grained perception. In addition, we design a demand-driven module (DDM) in the backbone network, which actively queries information across layers during image reconstruction and thereby integrates more spatial structural information. Notably, the CPN serves only as an auxiliary network during training, driving the backbone network to complete IVIF in a self-supervised manner. Experiments on various benchmark datasets and high-level vision tasks demonstrate the superiority of CS2Fusion over state-of-the-art IVIF methods.
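The SSO-based pair construction and contrastive constraint described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the Gram-matrix self-correlation, the L1-activation saliency proxy, the `sso_descriptor` helper, and the InfoNCE-style loss are all illustrative stand-ins for the paper's actual SSO and contrastive loss.

```python
import numpy as np

def self_correlation(feat):
    # Gram-style self-correlation of a (C, H, W) feature map (assumed proxy for SSO's self-correlation).
    f = feat.reshape(feat.shape[0], -1)          # (C, H*W)
    g = f @ f.T                                  # (C, C) channel correlations
    return g / (np.linalg.norm(g) + 1e-8)

def saliency(feat):
    # Channel-averaged absolute activation as a simple saliency proxy.
    return np.abs(feat).mean(axis=0).ravel()

def sso_descriptor(feat):
    # Concatenate normalized self-correlation and saliency into one descriptor.
    s = saliency(feat)
    return np.concatenate([self_correlation(feat).ravel(),
                           s / (np.linalg.norm(s) + 1e-8)])

def contrastive_loss(anchor, positives, negatives, tau=0.1):
    # InfoNCE-style loss: pull positive descriptors toward the anchor, push negatives away.
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    pos = np.exp([sim(anchor, p) / tau for p in positives])
    neg = np.exp([sim(anchor, n) / tau for n in negatives])
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

# Toy demo: a lightly perturbed view of the same features acts as a positive,
# features from an unrelated scene as a negative.
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8, 8))
anchor = sso_descriptor(feat)
pos = sso_descriptor(feat + 0.01 * rng.normal(size=feat.shape))
neg = sso_descriptor(rng.normal(size=feat.shape))
loss = contrastive_loss(anchor, [pos], [neg])
```

As expected for a contrastive objective, the loss is smaller when same-modality (similar-distribution) descriptors are treated as positives than when the pairing is inverted.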
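The self-supervised augmentation — simulating visually impaired areas by randomly cropping patches from the visible image to form multiple positive samples — can be illustrated with a small sketch. The helper name `random_crop_mask`, the patch size, and the patch count are all hypothetical choices, not values from the paper.

```python
import numpy as np

def random_crop_mask(img, num_patches=3, patch=16, rng=None):
    # Zero out a few random square patches to simulate visually impaired areas
    # (sketch of the abstract's random-cropping augmentation; parameters are assumed).
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(num_patches):
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        out[y:y + patch, x:x + patch] = 0
    return out

# Each call yields a different "impaired" view of the same scene,
# so repeated calls produce multiple positive samples.
masked = random_crop_mask(np.ones((64, 64)), rng=np.random.default_rng(1))
```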
Indexed in: SCI
Language: English
Document type: Journal article
Identifier: http://ir.kiz.ac.cn/handle/152453/14705
Collection: Kunming Institute of Zoology
Recommended citation:
GB/T 7714
Wang, X, Guan, Z, Qian, WH, et al. CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map[J]. INFORM FUSION, 2024, 102.
APA: Wang, X., Guan, Z., Qian, W. H., Cao, J. D., Liang, S., & Yan, J. (2024). CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map. INFORM FUSION, 102.
MLA: Wang, X, et al. "CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map". INFORM FUSION 102 (2024).
Files in this item
File name / size | Document type | Version | Access | License
QT2025041013.pdf (5309 KB) | Journal article | Published version | Open access | CC BY-NC-SA

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.