BRAVO Challenge

In conjunction with the Workshop on Uncertainty Quantification for Computer Vision, we are organizing a challenge on the robustness of autonomous driving in the open world. The 2024 BRAVO Challenge aims to benchmark segmentation models on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic corruptions.

Top teams will be required to contribute a short paragraph describing their solutions to a challenge paper, which will be included in the ECCV Workshop Proceedings.

For more information, please check the BRAVO Challenge Repository and the Challenge Task Website at ELSA.

Important Dates

All deadlines are at 23:59 CEST.

  • BRAVO Challenge 2024 launch, data and code available for download: 17/06/2024
  • Submission server open: 01/07/2024
  • Submissions deadline: 23/08/2024
  • Technical report deadline: 27/08/2024

General rules

  1. The task is semantic segmentation with pixel-wise evaluation performed on the 19 semantic classes of Cityscapes.
  2. Models in each track must be trained using only the datasets allowed for that track.
  3. Employing generative models for data augmentation is strictly forbidden.
  4. All results must be reproducible. Participants must submit a white paper containing comprehensive technical details alongside their results. Participants must make models and inference code accessible.
  5. Evaluation will consider the 19 classes of Cityscapes (see the evaluation sketch after this list).
  6. Teams must register a single account for submitting to the evaluation server. An organization (e.g. a University) may have several teams with independent accounts only if the teams are not cooperating on the challenge.
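
As a reference for rules 1 and 5, here is a minimal sketch of the standard pixel-wise mIoU computation over the 19 Cityscapes classes (assuming the usual convention of train IDs 0–18, with 255 as the ignore index). This is an illustration of the metric, not the official challenge scoring code.

```python
import numpy as np

NUM_CLASSES = 19     # the 19 evaluation classes of Cityscapes (train IDs 0-18)
IGNORE_INDEX = 255   # pixels outside the 19 classes are ignored

def confusion_matrix(pred, gt, num_classes=NUM_CLASSES):
    """Accumulate a pixel-wise confusion matrix, skipping ignored pixels."""
    mask = gt != IGNORE_INDEX
    idx = num_classes * gt[mask].astype(np.int64) + pred[mask].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages over the classes
    that appear in the ground truth or the predictions."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp  # predicted as class c, labelled otherwise
    fn = conf.sum(axis=1) - tp  # labelled as class c, predicted otherwise
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return iou, float(np.nanmean(iou))
```

As is standard for Cityscapes-style evaluation, the confusion matrix is accumulated over all images and mIoU is computed once at the end, rather than averaging per-image scores.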

The BRAVO Benchmark Dataset

We created the benchmark dataset with real captured images and realistic-looking synthetic augmentations, repurposing existing datasets and combining them with newly generated data. The benchmark dataset comprises images from ACDC, SegmentMeIfYouCan, Out-of-context Cityscapes, and new synthetic data.

Get the full benchmark dataset at the following link: full BRAVO Dataset download link.

The dataset includes the following subsets (with individual download links):

bravo-ACDC: real scenes captured in adverse weather conditions, i.e., fog, night, rain, and snow. (download link or directly from ACDC website)

bravo-SMIYC: real scenes featuring out-of-distribution (OOD) objects rarely encountered on the road. (download link or directly from SMIYC website)

bravo-synrain: augmented scenes with synthesized raindrops on the camera lens. We augmented the validation images of Cityscapes and generated 500 images with raindrops. (download link)

bravo-synobjs: augmented scenes with inpainted synthetic OOD objects. We augmented the validation images of Cityscapes and generated 656 images with 26 OOD objects. (download link)

bravo-synflare: augmented scenes with synthesized light flares. We augmented the validation images of Cityscapes and generated 308 images with random light flares. (download link)

bravo-outofcontext: augmented scenes with random backgrounds. We augmented the validation images of Cityscapes and generated 329 images with random backgrounds. (download link)
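
To give a sense of the expected workflow, the sketch below runs inference over the subsets and saves one prediction map per image. The directory names, the `predict` placeholder, and the output layout are hypothetical; refer to the BRAVO Challenge Repository for the official folder structure and submission format.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical layout: adapt the root and subset names to the archives you
# downloaded, and the output format to the official submission instructions.
DATA_ROOT = Path("bravo_dataset")
SUBSETS = ["bravo-ACDC", "bravo-SMIYC", "bravo-synrain",
           "bravo-synobjs", "bravo-synflare", "bravo-outofcontext"]

def predict(image: np.ndarray) -> np.ndarray:
    """Placeholder for your model: maps an HxWx3 image to an HxW array of
    class IDs in [0, 18]."""
    raise NotImplementedError

for subset in SUBSETS:
    for img_path in sorted((DATA_ROOT / subset).rglob("*.png")):
        image = np.asarray(Image.open(img_path).convert("RGB"))
        pred = predict(image).astype(np.uint8)
        out_path = Path("predictions") / subset / img_path.name
        out_path.parent.mkdir(parents=True, exist_ok=True)
        Image.fromarray(pred).save(out_path)
```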

Challenge Tracks

We propose two tracks:

Track 1 – Single-domain training

In this track, you must train your models exclusively on the Cityscapes dataset. This track evaluates the robustness of models trained with limited supervision and geographical diversity when facing unexpected corruptions observed in real-world scenarios.

Track 2 – Multi-domain training

In this track, you must train your models on a mix of datasets, chosen strictly from the list provided below, comprising both natural and synthetic domains. This track assesses how relaxing the constraints on training data affects robustness.

Allowed training datasets for Track 2:


Supported by: