Project page template is borrowed from DreamFusion.
Built on Tianmouc, a complementary vision sensor (CVS), we propose the $\textbf{S}$patio-$\textbf{T}$emporal Difference $\textbf{G}$uided $\textbf{D}$eblur $\textbf{N}$et (STGDNet) for motion deblurring. It achieves strong performance in real-world extreme blur scenarios.
Drag the slider to compare the blurred input with our deblurred result.
Given a single blurred frame as input, our method reconstructs the motion within the exposure time and generates a clear video.