Multi-modal crash prediction based on V2X and visual information

Description

Human motion trajectory prediction is a key element for intelligent autonomous systems that interact with humans [1]. This is especially true when it comes to preventing accidents between intelligent vehicles (IV) and pedestrians. One promising approach to reducing such accidents is V2X technology [2].

This work aims to combine V2X technology with approaches that predict pedestrians' motion from visual information. More concretely, the vehicle will be an eBike that broadcasts status information about its position, speed and direction via V2X (e.g. Cooperative Awareness Messages (CAM) [3]). In addition, we will use a V2X-equipped social robot that operates in the pedestrians' space. The advantage of the robot lies in its capability to serve as an interface between V2X-based communication and social interaction with pedestrians. It can further use visual perception to predict pedestrians' trajectories. By combining these predictions with the status received from the eBike, the robot can predict potential crashes between the vehicle and a pedestrian. Based on these predictions, the robot can then either ask the vehicle to change its behaviour via V2X or use social interaction to signal a stop to the pedestrian.
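To make this combination concrete, the following minimal Python sketch shows one way the crash check could work: the eBike is rolled forward at constant velocity from its last reported state and compared step by step against the predicted pedestrian waypoints. All names (BikeState, predict_crash) and parameter values are illustrative assumptions, not part of the existing pipeline; the CAM content is reduced to the position, speed and heading fields mentioned above.

    # Hypothetical sketch; names and thresholds are illustrative, not taken
    # from the project's codebase. The CAM-style status is reduced to
    # position, speed and heading.
    import math
    from dataclasses import dataclass

    @dataclass
    class BikeState:
        """Minimal stand-in for the status a CAM-style message carries."""
        x: float        # position east [m]
        y: float        # position north [m]
        speed: float    # [m/s]
        heading: float  # [rad], 0 = east, counter-clockwise

    def predict_crash(bike: BikeState,
                      pedestrian_track: list[tuple[float, float]],
                      dt: float = 0.5,
                      safety_radius: float = 1.5) -> int | None:
        """Roll the bike forward at constant velocity and compare it against
        the predicted pedestrian waypoints (one per dt). Returns the index of
        the first conflicting step, or None if no conflict is found."""
        vx = bike.speed * math.cos(bike.heading)
        vy = bike.speed * math.sin(bike.heading)
        for k, (px, py) in enumerate(pedestrian_track, start=1):
            bx = bike.x + vx * k * dt
            by = bike.y + vy * k * dt
            if math.hypot(bx - px, by - py) < safety_radius:
                return k
        return None

    # Example: bike heading east at 5 m/s, pedestrian crossing its path.
    bike = BikeState(x=0.0, y=0.0, speed=5.0, heading=0.0)
    track = [(10.0, 2.0 - 0.5 * k) for k in range(1, 9)]  # walking south
    print(predict_crash(bike, track))  # -> step index of the predicted conflict

In a real system this geometric check would of course have to account for prediction uncertainty and communication latency, which is part of what the project is about.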

Building upon an existing proof-of-concept of the described interaction, you will improve the existing pipeline. More concretely, you will:

  • familiarize yourself with the state of the art in V2X communication and pedestrian motion prediction;
  • implement one or more prediction approaches, or integrate existing implementations (a minimal baseline is sketched after this list);
  • validate your implementation in an experiment.
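As an illustration of the second task, the sketch below shows one of the simplest prediction approaches surveyed in [1], a constant-velocity baseline. The function name and interface are assumptions made for illustration; the existing proof-of-concept may expose prediction differently.

    # Constant-velocity baseline of the kind surveyed in [1]; the interface
    # is illustrative, not taken from the existing pipeline.
    import numpy as np

    def constant_velocity_prediction(observed: np.ndarray,
                                     horizon: int,
                                     dt: float = 0.5) -> np.ndarray:
        """Extrapolate a pedestrian track (N x 2 array of x, y positions
        sampled every dt seconds) for `horizon` future steps, assuming the
        last observed velocity stays constant."""
        velocity = (observed[-1] - observed[-2]) / dt  # [m/s]
        steps = np.arange(1, horizon + 1).reshape(-1, 1)
        return observed[-1] + steps * dt * velocity

    observed = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2]])  # 2 Hz track
    print(constant_velocity_prediction(observed, horizon=4))

Despite its simplicity, such a baseline is a useful reference point when validating learned prediction models in the experiment.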

This project requires you to be proficient in Python and/or C++ programming. Experience with computer vision is highly recommended. Basic knowledge of the Robot Operating System (ROS) and experience with real robots is advantageous but not necessary.


References

  1. Rudenko, Andrey, Luigi Palmieri, Michael Herman, Kris M. Kitani, Dariu M. Gavrila, and Kai O. Arras. “Human Motion
    Trajectory Prediction: A Survey.” The International Journal of Robotics Research 39, no. 8 (July 2020): 895–935.
    https://doi.org/10.1177/0278364920917446.
  2. Liu, Si, Chen Gao, Yuan Chen, Xingyu Peng, Xianghao Kong, Kun Wang, Runsheng Xu, et al. “Towards Vehicle-to-Everything
    Autonomous Driving: A Survey on Collaborative Perception.” arXiv, August 31, 2023.
    https://doi.org/10.48550/arXiv.2308.16714.
  3. Santa, José, Fernando Pereniguez-Garcia, Antonio Moragón, and Antonio Skarmeta. “Experimental Evaluation of CAM and
    DENM Messaging Services in Vehicular Communications.” Transportation Research Part C: Emerging Technologies 46
    (September 1, 2014): 98–120.
    https://doi.org/10.1016/j.trc.2014.05.006.