A Review of Feed-Forward 4D Reconstruction for Connected Autonomous Driving

  • Type: Bachelor/Master’s Thesis
  • Date: Immediately
  • Supervisor: Lei Wan

Background:

Autonomous driving systems require a consistent understanding of dynamic environments, covering both 3D geometry and temporal motion. This need has motivated growing research interest in 4D reconstruction, which models full spatio-temporal scenes (3D + time). Traditional 4D reconstruction pipelines typically rely on multi-stage optimization, explicit pose estimation, and iterative neural rendering, making them computationally expensive and difficult to scale to real-world driving scenarios. Recently, feed-forward 4D reconstruction has emerged as a new paradigm: transformer-based methods such as Any4D and DGGT reconstruct dynamic driving scenes directly in a single forward pass, without per-scene optimization. These approaches offer real-time inference, strong generalization, and explicit scene representations, making them highly promising for autonomous driving perception. The main objective of this thesis is to systematically study and benchmark feed-forward 4D reconstruction methods in autonomous driving scenarios.
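To make the paradigm concrete, below is a minimal, hypothetical PyTorch sketch of the feed-forward idea: a short video clip goes through joint spatio-temporal attention once, and the network directly emits per-pixel 3D point maps and motion vectors, with no test-time optimization. All names (`FeedForward4DSketch`, the head layout, the clip shape) are illustrative assumptions, not the actual Any4D or DGGT architectures or APIs.

```python
import torch
import torch.nn as nn


class FeedForward4DSketch(nn.Module):
    """Hypothetical stand-in for a feed-forward 4D reconstructor:
    one forward pass maps a clip to per-pixel XYZ point maps and
    scene-flow vectors, with no per-scene optimization."""

    def __init__(self, dim: int = 256, num_layers: int = 4, patch: int = 16):
        super().__init__()
        self.patch = patch
        # Patch embedding shared across all frames (3 RGB channels in).
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Transformer encoder attending jointly over space and time.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Per-token head: patch*patch pixels, 3 XYZ + 3 flow channels each.
        self.head = nn.Linear(dim, patch * patch * 6)

    def forward(self, video: torch.Tensor) -> dict:
        # video: (B, T, 3, H, W), a short clip of consecutive frames.
        B, T, _, H, W = video.shape
        tokens = self.embed(video.flatten(0, 1))        # (B*T, dim, h, w)
        h, w = tokens.shape[-2:]
        tokens = tokens.flatten(2).transpose(1, 2)      # (B*T, h*w, dim)
        tokens = tokens.reshape(B, T * h * w, -1)       # joint space-time tokens
        tokens = self.encoder(tokens)
        out = self.head(tokens)                         # (B, T*h*w, p*p*6)
        out = out.reshape(B, T, h, w, self.patch, self.patch, 6)
        out = out.permute(0, 1, 6, 2, 4, 3, 5).reshape(B, T, 6, H, W)
        return {"points": out[:, :, :3], "flow": out[:, :, 3:]}


if __name__ == "__main__":
    model = FeedForward4DSketch()
    clip = torch.randn(1, 4, 3, 64, 64)   # one clip of 4 frames
    with torch.no_grad():
        pred = model(clip)                 # single pass, no test-time fitting
    print(pred["points"].shape, pred["flow"].shape)  # (1, 4, 3, 64, 64) each
```

The key design point is that everything a classical pipeline would optimize per scene (geometry, motion) is instead amortized into the network weights, which is what enables real-time inference and direct generalization to unseen scenes.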


Your Tasks:
  • Conduct a structured literature review of recent feed-forward 4D reconstruction methods.
  • Reproduce and evaluate selected methods on public autonomous driving datasets.
  • Perform benchmark comparisons in terms of reconstruction quality, motion consistency, generalization ability, and inference efficiency (see the evaluation sketch below).
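As a starting point for the benchmarking task, the following sketch times a single forward pass and scores predicted point maps against ground truth. It assumes a model exposing the clip-in, dict-out interface of the earlier sketch; the function name, the dummy model, and the choice of mean absolute error as a quality proxy are all assumptions for illustration, not a prescribed protocol.

```python
import time

import torch


def benchmark_clip(model: torch.nn.Module, clip: torch.Tensor,
                   gt_points: torch.Tensor) -> dict:
    """Time one forward pass and score the predicted point maps.
    On GPU inputs we synchronize so the timing covers the full kernel."""
    model.eval()
    with torch.no_grad():
        if clip.is_cuda:
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        pred = model(clip)  # single feed-forward pass, no test-time fitting
        if clip.is_cuda:
            torch.cuda.synchronize()
        latency_ms = (time.perf_counter() - t0) * 1e3
        # Mean absolute XYZ error as a crude quality proxy; a real benchmark
        # would add Chamfer distance, depth metrics, and scene-flow
        # end-point error to cover motion consistency as well.
        point_mae = (pred["points"] - gt_points).abs().mean().item()
    return {"latency_ms": latency_ms, "point_mae": point_mae}


if __name__ == "__main__":
    class _Dummy(torch.nn.Module):
        # Stand-in for a real reconstructor with the interface above.
        def forward(self, clip):
            B, T, _, H, W = clip.shape
            return {"points": torch.zeros(B, T, 3, H, W)}

    clip = torch.randn(1, 4, 3, 64, 64)
    gt = torch.randn(1, 4, 3, 64, 64)
    print(benchmark_clip(_Dummy(), clip, gt))
```

Averaging such per-clip measurements over held-out sequences from different datasets gives exactly the comparison axes named above: quality and motion metrics per method, latency for efficiency, and cross-dataset deltas for generalization.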

Your Profile:
  • Solid background in deep learning and computer vision.
  • Experience with PyTorch and Python.
  • Basic knowledge of 3D geometry or multi-view vision.
  • Independent, research-oriented mindset and the ability to explore open-ended problems.


Start date: Immediately

Duration: As per the applicable examination regulations.


If you are interested or have any questions regarding this thesis position, feel free to contact the supervisor.