Computational Photography

Multi-exposure image fusion is a core task in computer vision and computational photography: it combines several images of a scene, captured at different exposure levels, into a single high-quality image. Traditional fusion methods, however, often suffer from ghosting artifacts when the scene or camera moves between exposures, and from loss of detail in poorly exposed regions. Deep learning has emerged as an effective way to address these limitations. By learning the underlying structure of multi-exposure image stacks, trained models can capture both global and local information in the inputs, producing fusion results with better visual quality and fewer ghosting artifacts. Integrating machine learning with multi-exposure fusion thus improves image quality in applications such as HDR photography, surveillance systems, and medical imaging.
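For context on the traditional methods mentioned above, classical exposure fusion (in the style of Mertens et al.) blends the input stack using per-pixel quality weights rather than a learned model. A minimal NumPy sketch of the well-exposedness weighting idea, purely illustrative and not one of the referenced methods:

```python
import numpy as np

def exposure_fuse(images, sigma=0.2):
    """Fuse differently exposed images (float arrays in [0, 1]) using
    per-pixel well-exposedness weights, as in classical exposure fusion.

    Simplified sketch: full implementations also use contrast and
    saturation weights plus multiscale (Laplacian pyramid) blending
    to avoid seams, and none of this deghosts moving content.
    """
    stack = np.stack(images, axis=0)  # shape (N, H, W) or (N, H, W, C)
    # Well-exposedness: a Gaussian centered at mid-gray (0.5), so pixels
    # that are neither under- nor over-exposed receive the largest weight.
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12  # normalize over exposures
    return (weights * stack).sum(axis=0)

# Example: a uniformly dark and a uniformly bright exposure.
dark = np.full((4, 4), 0.1)
bright = np.full((4, 4), 0.9)
fused = exposure_fuse([dark, bright])
```

Because such hand-crafted weights are computed independently per pixel, any object motion between exposures blends into ghosts; the learned deghosting methods in the references below are designed to resolve exactly that failure mode.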

References

  1. K. Ram Prabhakar et al., Self-Gated Memory Recurrent Network for Efficient Scalable HDR Deghosting, IEEE TCI 2021.
  2. K. Ram Prabhakar et al., Labeled from Unlabeled: Exploiting Unlabeled Data for Few-shot Deep HDR Deghosting, CVPR 2021.
  3. K. Ram Prabhakar et al., Towards Practical and Efficient High-Resolution HDR Deghosting with CNN, ECCV 2020.
  4. K. Ram Prabhakar et al., DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs, ICCV 2017.