The lecture and exercises will cover 3D reconstruction from various input modalities (webcams, RGB-D cameras such as Kinect and RealSense, etc.). It will start with basic concepts: what 3D data is, the different representations, how to capture 3D, and how the devices and sensors function. Building on this introduction, rigid and non-rigid tracking and reconstruction will be discussed. Specialized face and body tracking methods will be covered, and applications of 3D reconstruction and tracking will be shown. In addition to 3D surface reconstruction, techniques for appearance modelling and material estimation will be presented.


  • Basic concepts of geometry (Meshes, Point Clouds, Pixels & Voxels)
  • RGB and Depth Cameras (Calibration, active/passive stereo, Time of Flight (ToF), Structured Light, Laser Scanner, Lidar)
  • Surface Representations (Polygonal meshes, parametric surfaces, implicit surfaces (Radial basis functions, signed distance functions, indicator function), Marching cubes)
  • Overview of reconstruction methods (Structure from Motion (SfM), Multi-view Stereo (MVS), SLAM, Bundle Adjustment)
  • Rigid Surface Tracking & Reconstruction (Pose alignment, ICP, online surface reconstruction pipeline (KinectFusion), scalable surface representations (VoxelHashing, octrees), loop closures and global optimization)
  • Non-rigid Surface Tracking & Reconstruction (Surface deformation for modeling; regularizers: ARAP, embedded deformation (ED), etc.; non-rigid surface fitting, e.g., non-rigid ICP; non-rigid reconstruction: DynamicFusion, VolumeDeform, KillingFusion)
  • Face Tracking & Reconstruction (Keypoint detection & tracking, parametric/statistical models → blendshapes)
  • Body Tracking & Reconstruction (Skeleton Tracking and Inverse Kinematics, Marker-based motion capture)
  • Material capture (Lightstage, BRDF estimation)
  • Outlook: deep-learning-based tracking
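To give a flavour of the pose-alignment topic: the inner step of ICP can be solved in closed form as a least-squares rigid fit (Kabsch/Procrustes). The sketch below is illustrative only, not course material; all names and the toy data are assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/Procrustes):
    find R, t minimising sum ||R @ src[i] + t - dst[i]||^2 for
    corresponding point pairs. ICP alternates nearest-neighbour
    matching with exactly this step."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy example: recover a known rigid motion of a small point cloud.
rng = np.random.default_rng(1)
pts = rng.standard_normal((30, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
target = pts @ R_true.T + t_true

R, t = best_rigid_transform(pts, target)
err = np.abs(pts @ R.T + t - target).max()
```

With known correspondences the fit is exact up to floating-point error; the hard part of ICP, finding the correspondences, is what the iteration is for.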
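The online reconstruction idea behind KinectFusion-style pipelines, fusing noisy depth measurements into a truncated signed distance field (TSDF) by a weighted running average, can be sketched in one dimension along a single camera ray. This is a toy illustration under assumed values (truncation distance, voxel spacing), not the actual pipeline.

```python
import numpy as np

TRUNC = 0.1  # truncation distance in metres (assumed value)

def integrate(tsdf, weight, voxel_depths, surface_depth):
    """Fuse one depth observation into a 1-D TSDF along a camera ray
    using a KinectFusion-style weighted running average."""
    sdf = surface_depth - voxel_depths       # signed distance along the ray
    valid = sdf > -TRUNC                     # skip voxels far behind the surface
    d = np.clip(sdf, -TRUNC, TRUNC) / TRUNC  # truncate, normalise to [-1, 1]
    new_w = weight + valid                   # one more observation where valid
    fused = np.where(valid, (tsdf * weight + d) / np.maximum(new_w, 1), tsdf)
    return fused, new_w

voxel_depths = np.linspace(0.0, 2.0, 201)    # voxel centres along the ray
tsdf = np.zeros_like(voxel_depths)
weight = np.zeros_like(voxel_depths)
for obs in (1.02, 0.98, 1.01, 0.99):         # noisy depths of a wall at 1 m
    tsdf, weight = integrate(tsdf, weight, voxel_depths, obs)

# The zero crossing of the fused TSDF is the reconstructed surface;
# marching cubes performs the same interpolation per voxel edge in 3-D.
i = np.where((tsdf[:-1] > 0) & (tsdf[1:] <= 0))[0][0]
frac = tsdf[i] / (tsdf[i] - tsdf[i + 1])
surface = voxel_depths[i] + frac * (voxel_depths[i + 1] - voxel_depths[i])
```

Averaging in distance space is what lets independent noisy frames cancel out; the recovered zero crossing lands at the mean of the observations (1.0 m here) with sub-voxel precision.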