Hon Hai Research Institute (HHRI), a division of Hon Hai Technology Group (Foxconn), has announced that its AI model ModeSeq secured the top position in the Waymo Open Dataset Challenge and was presented at CVPR 2025, a leading AI and computer vision conference.
“ModeSeq empowers autonomous vehicles with more accurate and diverse predictions of traffic participant behaviors,” said Yung-Hui Li, director of the Artificial Intelligence Research Center at HHRI. “It directly enhances decision-making safety, reduces computational cost, and introduces unique mode-extrapolation capabilities to dynamically adjust the number of predicted behavior modes based on scenario uncertainty.”
HHRI's collaboration with the City University of Hong Kong resulted in the presentation of "ModeSeq: Taming Sparse Multimodal Motion Prediction with Sequential Mode Modeling" at CVPR 2025. The technology introduces sequential mode modeling and employs an Early-Match-Take-All (EMTA) loss function to enhance multimodal predictions. It also uses Factorized Transformers and a hybrid architecture for scene processing. Parallel ModeSeq, a refinement of the technology, won the Waymo Open Dataset Challenge's Interaction Prediction Track at the CVPR WAD Workshop, outperforming entries from other leading academic and industry teams.
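To make the Early-Match-Take-All idea concrete, the toy sketch below shows one plausible reading of it: among sequentially decoded modes, the loss targets the *earliest* mode that matches the ground truth within a miss threshold, rather than the overall closest mode as in winner-take-all training. The function name, the use of final-displacement error, and the 2.0 m threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def emta_select(pred_trajs, gt_traj, miss_threshold=2.0):
    """Illustrative Early-Match-Take-All mode selection (hypothetical sketch).

    pred_trajs: (num_modes, T, 2) predicted trajectories, ordered by the
                sequence in which the model decoded them.
    gt_traj:    (T, 2) ground-truth trajectory.

    Returns the index of the earliest decoded mode whose final-displacement
    error (FDE) falls under `miss_threshold`; if no mode matches, falls back
    to the closest mode, as winner-take-all training would.
    """
    fde = np.linalg.norm(pred_trajs[:, -1] - gt_traj[-1], axis=-1)
    matches = np.flatnonzero(fde < miss_threshold)
    return int(matches[0]) if matches.size else int(np.argmin(fde))

# Toy example: three 5-step modes; the ground truth moves straight ahead.
T = 5
gt = np.stack([np.linspace(0.0, 4.0, T), np.zeros(T)], axis=-1)
modes = np.stack([gt + 5.0, gt + 0.5, gt + 0.1])  # modes 1 and 2 both match
print(emta_select(modes, gt))  # prints 1: earliest matching mode, not closest
```

In this example a plain winner-take-all rule would pick mode 2 (the closest), whereas the early-match rule rewards mode 1 for matching sooner in the decoding sequence, which is the behavior the EMTA loss is described as encouraging.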