In conjunction with the Workshop on Uncertainty Quantification for Computer Vision, we are organizing a challenge on the robustness of autonomous driving in the open world. The 2024 BRAVO Challenge aims to benchmark segmentation models on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic corruptions.
Top teams will be required to contribute a short description of their solution (a paragraph) to the challenge paper, which will be included in the ECCV Workshop Proceedings.
For more information, please check the BRAVO Challenge Repository and the Challenge Task Website at ELSA.
All deadlines are at 23:59 CEST.
We created the benchmark dataset from real captured images and realistic-looking synthetic augmentations, repurposing existing datasets and combining them with newly generated data. The benchmark dataset comprises images from ACDC, SegmentMeIfYouCan, Out-of-context Cityscapes, and new synthetic data.
Get the full benchmark dataset at the following link: full BRAVO Dataset download link.
The dataset includes the following subsets (with individual download links):
bravo-ACDC: real scenes captured in adverse conditions, i.e., fog, night, rain, and snow. (download link or directly from ACDC website)
bravo-SMIYC: real scenes featuring out-of-distribution (OOD) objects rarely encountered on the road. (download link or directly from SMIYC website)
bravo-synrain: augmented scenes with synthesized raindrops on the camera lens. We augmented the validation images of Cityscapes and generated 500 images with raindrops. (download link)
bravo-synobjs: augmented scenes with inpainted synthetic OOD objects. We augmented the validation images of Cityscapes and generated 656 images with 26 OOD objects. (download link)
bravo-synflare: augmented scenes with synthesized light flares. We augmented the validation images of Cityscapes and generated 308 images with random light flares. (download link)
bravo-outofcontext: augmented scenes with random backgrounds. We augmented the validation images of Cityscapes and generated 329 images with random backgrounds. (download link)
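After downloading, it can be useful to verify that each synthetic subset contains the number of images stated above. The sketch below checks those counts; the directory names (`bravo_synrain`, etc.) and the `.png` extension are assumptions about the extracted archive layout, so adjust them to match the actual download.

```python
from pathlib import Path

# Expected image counts per synthetic subset, as stated in the dataset
# description. Directory names below are assumptions; rename them to
# match the layout of the extracted BRAVO archives.
EXPECTED_COUNTS = {
    "bravo_synrain": 500,
    "bravo_synobjs": 656,
    "bravo_synflare": 308,
    "bravo_outofcontext": 329,
}

def check_subset_counts(root, expected=EXPECTED_COUNTS, ext=".png"):
    """Return a dict mapping subset name -> (found, expected) image counts."""
    root = Path(root)
    report = {}
    for name, n_expected in expected.items():
        subset_dir = root / name
        # Count images recursively; 0 if the subset directory is missing.
        n_found = (
            len(list(subset_dir.rglob(f"*{ext}"))) if subset_dir.is_dir() else 0
        )
        report[name] = (n_found, n_expected)
    return report
```

A mismatch between found and expected counts usually indicates an incomplete download or extraction.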
We propose two tracks:
Track 1: In this track, you must train your models exclusively on the Cityscapes dataset. This track evaluates the robustness of models trained with limited supervision and geographical diversity when facing unexpected corruptions observed in real-world scenarios.
Track 2: In this track, you must train your models on a mix of datasets, chosen strictly from the list provided below and spanning both natural and synthetic domains. This track assesses how relaxing constraints on the training data affects robustness.
Allowed training datasets for Track 2:
Supported by: