Spatio-Temporal Difference Guided Motion Deblurring with the Complementary Vision Sensor

Yapeng Meng$^\dagger$

Tsinghua University

Lin Yang$^\dagger$

Communication University of China

Yuguo Chen

Tsinghua University

Xiangru Chen

Tsinghua University

Taoyi Wang

PrimeVision

Lijian Wang

Tsinghua University

Zheyu Yang

PrimeVision

Yihan Lin$^*$

Xiamen University

Rong Zhao$^*$

Tsinghua University

Conference on Computer Vision and Pattern Recognition (CVPR) 2026
$^\dagger$ equal contribution    $^*$ corresponding author


Abstract

Building on Tianmouc, a complementary vision sensor (CVS), we propose the $\textbf{S}$patio-$\textbf{T}$emporal Difference $\textbf{G}$uided $\textbf{D}$eblur $\textbf{N}$et (STGDNet) for motion deblurring. Guided by the sensor's spatio-temporal difference streams, STGDNet achieves strong performance in real-world extreme-blur scenarios.
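
As a rough illustration of the guidance idea only (not the actual STGDNet architecture), the sketch below shows one way a blurred RGB frame could be modulated by temporal-difference (TD) and spatial-difference (SD) maps from a CVS. All module names, channel sizes, and the fusion scheme are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the actual STGDNet architecture.
# Shows one plausible way to condition a deblurring block on the
# temporal-difference (TD) and spatial-difference (SD) streams of a CVS.
import torch
import torch.nn as nn


class GuidedFusionBlock(nn.Module):
    """Hypothetical block: modulate image features with difference-stream features."""

    def __init__(self, channels: int):
        super().__init__()
        self.img_conv = nn.Conv2d(3, channels, 3, padding=1)    # blurred RGB frame
        self.diff_conv = nn.Conv2d(2, channels, 3, padding=1)   # stacked TD + SD maps
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.out_conv = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, blurred: torch.Tensor, diff: torch.Tensor) -> torch.Tensor:
        f_img = self.img_conv(blurred)                 # (B, C, H, W)
        f_diff = self.diff_conv(diff)                  # (B, C, H, W), diff maps resized to frame size
        fused = f_img * self.gate(f_diff) + f_img      # difference-guided modulation
        return blurred + self.out_conv(fused)          # residual prediction of the sharp frame


if __name__ == "__main__":
    block = GuidedFusionBlock(channels=32)
    blurred = torch.randn(1, 3, 256, 256)              # single blurred frame
    diff = torch.randn(1, 2, 256, 256)                  # hypothetical TD and SD channels
    sharp = block(blurred, diff)
    print(sharp.shape)  # torch.Size([1, 3, 256, 256])
```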


Model Overview


Real-World Deblurring Results

Drag the slider to compare the blurred input with our deblurred result.


Single-Frame to Video

Given a single blurred frame as input, our method reconstructs the motion within the exposure time and generates a clear video.
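
A minimal sketch of the assumed input/output interface is shown below: one blurred frame plus the difference streams recorded during its exposure are mapped to a sequence of latent sharp frames. The model class, layer choices, and the number of reconstructed frames are hypothetical and only illustrate the single-frame-to-video setting, not the released implementation.

```python
# Hypothetical interface sketch -- not the released STGDNet API.
# One blurred frame plus the CVS difference streams captured during its
# exposure are mapped to N latent sharp frames sampled within that exposure.
import torch
import torch.nn as nn


class BlurToVideo(nn.Module):
    """Toy stand-in for a single-frame-to-video deblurring model."""

    def __init__(self, num_frames: int = 8, channels: int = 32):
        super().__init__()
        self.num_frames = num_frames
        self.encoder = nn.Conv2d(3 + 2, channels, 3, padding=1)           # frame + TD/SD guidance
        self.decoder = nn.Conv2d(channels, 3 * num_frames, 3, padding=1)  # all latent frames at once

    def forward(self, blurred: torch.Tensor, diff: torch.Tensor) -> torch.Tensor:
        x = torch.cat([blurred, diff], dim=1)            # (B, 5, H, W)
        feats = torch.relu(self.encoder(x))              # (B, C, H, W)
        video = self.decoder(feats)                      # (B, 3*N, H, W)
        b, _, h, w = video.shape
        return video.view(b, self.num_frames, 3, h, w)   # (B, N, 3, H, W) latent sharp frames


if __name__ == "__main__":
    model = BlurToVideo(num_frames=8)
    blurred = torch.randn(1, 3, 256, 256)
    diff = torch.randn(1, 2, 256, 256)
    video = model(blurred, diff)
    print(video.shape)  # torch.Size([1, 8, 3, 256, 256])
```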


The project page template is borrowed from DreamFusion.