Reconstruction/synthesis of 3D virtual worlds from segmented monocular image sequences of urban driving datasets
Project Principal Investigator(s): Jaya Sreevalsan Nair
A research project on generating 3D virtual worlds from sequences of segmented monocular images in urban driving datasets, re-creating both the static scene and the motion/animation of the moving vehicle in a 3D virtual environment. The datasets consist of image sequences, i.e., frames from videos shot by a camera mounted on a moving vehicle. The idea is to estimate the depth of objects in each image using machine learning and then, using the knowledge of the objects in the scene, perform a realistic reconstruction or synthesis of a 3D virtual world.
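As a minimal sketch of the geometric step, once a machine-learning model has produced a per-pixel depth map for a frame, the depths can be back-projected into a 3D point cloud in the camera frame under a pinhole camera model. The function and intrinsic parameters below (`backproject_depth`, `fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not part of the project's actual pipeline:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) into camera-frame
    3D points, assuming a pinhole camera with focal lengths (fx, fy)
    and principal point (cx, cy) in pixel units."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W) x 3 point cloud
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Tiny synthetic example: a 2x2 depth map, every pixel 2 m away
depth = np.full((2, 2), 2.0)
pts = backproject_depth(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

Repeating this per frame, with segmentation labels attached to each point, gives labelled point clouds from which scene geometry and the ego-vehicle's motion can be recovered.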