KMS KUNMING INSTITUTE OF ZOOLOGY.CAS
| Title | CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map |
| Authors | Wang, X; Guan, Z; Qian, WH; Cao, JD; Liang, S; Yan, J |
| Year | 2024 |
| Journal | INFORM FUSION |
| ISSN | 1566-2535 |
| Volume | 102 |
| Abstract | In infrared and visible image fusion (IVIF), prior knowledge constraints established with image-level information often ignore the identity and differences between source image features and cannot fully utilize the complementary information role of infrared images to visible images. For this purpose, this study develops a Contrastive learning-based Self-Supervised fusion model (CS2Fusion), which considers infrared images as a complement to visible images, and develops a Compensation Perception Network (CPN) to guide the backbone network to generate fusion images by estimating the feature compensation map of infrared images. The core idea behind this method is based on the following observations: (1) there is usually a significant disparity in semantic information between different modalities; (2) despite the large semantic differences, the distribution of self-correlation and saliency features tends to be similar among features of the same modality. Building upon these observations, we use a self-correlation and saliency operation (SSO) to construct positive and negative pairs, driving the CPN to perceive the complementary features of infrared images relative to visible images under the constraint of a contrastive loss. The CPN also incorporates a self-supervised learning mechanism, in which visually impaired areas are simulated by randomly cropping patches from visible images, providing more varied views of the same scene that form multiple positive samples and enhance the model's fine-grained perception capability. In addition, we design a demand-driven module (DDM) in the backbone network, which actively queries inter-layer information during image reconstruction to integrate more spatial structural information. Notably, the CPN serves as an auxiliary network used only during training, driving the backbone network to complete IVIF in a self-supervised form. Experiments on various benchmark datasets and high-level vision tasks demonstrate the superiority of our CS2Fusion over state-of-the-art IVIF methods. |
| Indexed By | SCI |
| Language | English |
| Document Type | Journal article |
| Identifier | http://ir.kiz.ac.cn/handle/152453/14705 |
| Collection | Kunming Institute of Zoology |
| Recommended Citation (GB/T 7714) | Wang, X, Guan, Z, Qian, WH, et al. CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map[J]. INFORM FUSION, 2024, 102. |
| APA | Wang, X, Guan, Z, Qian, WH, Cao, JD, Liang, S, & Yan, J. (2024). CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map. INFORM FUSION, 102. |
| MLA | Wang, X, et al. "CS2Fusion: Contrastive learning for Self-Supervised infrared and visible image fusion by estimating feature compensation map". INFORM FUSION 102 (2024). |
| Files in This Item | |
| File Name/Size | Document Type | Version | Access | License |
| QT2025041013.pdf (5309KB) | Journal article | Published version | Open access | CC BY-NC-SA |
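The abstract describes a contrastive constraint that drives the CPN to pull features toward positive samples and away from negatives. The paper's specific feature construction (the self-correlation and saliency operation, SSO) is not reproduced here; the sketch below only illustrates the generic InfoNCE-style loss form such a constraint commonly takes, with plain feature vectors standing in for the SSO-derived features.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (a sketch, not the paper's code).

    The loss is low when the anchor is similar to the positive sample and
    dissimilar to the negatives, which is the behaviour the contrastive
    constraint in the abstract enforces on the CPN's features.
    """
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

A matched anchor/positive pair yields a smaller loss than an anchor paired with a dissimilar sample, so minimizing this quantity pulls complementary features together while pushing unrelated ones apart.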
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.