Mapillary Vistas Instance Segmentation Task Robust Vision @ ECCV 2020

Organized by Mapillary


Robust Vision Workshop @ ECCV 2020: Instance Segmentation Challenge

We are pleased to contribute the Mapillary Vistas dataset to the Robust Vision Workshop @ ECCV 2020: Instance Segmentation Task. The goal of this task is to provide instance-specific segmentation results for a subset of 37 object classes on MVD. In this way, the results allow counting individual instances of classes, e.g. the number of cars or pedestrians in an image. Details on the submission format are provided next and (mostly) follow the specification of the corresponding COCO task, with some minor modifications.

This CodaLab evaluation server provides a platform to measure performance on the validation and test sets, respectively. A slightly modified variant of the COCO Panoptic API is used to compute the main metric used for ranking.

The submission format is similar to the one described on the COCO dataset, however, due to the difference in naming convention of files, we adopted the following format:

[{
    "image_id" : str,
    "category_id" : int,
    "segmentation" : RLE,
    "score" : float
}]

Please note that the value for image_id is a string (and should be filled with the image filename without extension), while for the COCO dataset it is an integer. This change is due to the different naming conventions used for the Mapillary Vistas and COCO datasets, respectively.

The category_id is a 1-based integer mapping to the respective class label positions in the config.json file, found in the dataset zip file described above. For example, class Bird is the first entry in the config file, so corresponding instances should receive category_id: 1 (rather than 0). In addition, please note that the config file also contains stuff classes, so values for category_id are not continuously assigned from 1 to 37.

The segmentation itself is stored as a run-length-encoded binary mask, and you can find helper scripts for encoding/decoding in Python or Matlab. To check the correctness of your submission format, please submit results for the validation set through the corresponding phase of this benchmark server.
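To illustrate the format, the sketch below builds one result entry with a pure-Python, uncompressed column-major RLE (runs of 0s and 1s over the mask flattened in Fortran order, starting with zeros). This is only a minimal illustration of the encoding convention; real submissions would typically use the provided helper scripts or pycocotools, which produce compressed RLE. The image_id, mask, and score values are hypothetical.

```python
import numpy as np

def encode_rle(mask: np.ndarray) -> dict:
    """Encode a binary mask as uncompressed COCO-style RLE.

    Counts alternate runs of 0s and 1s over the mask flattened in
    column-major (Fortran) order, starting with a (possibly empty)
    run of 0s, following the COCO convention.
    """
    flat = mask.flatten(order="F")
    counts = []
    prev = 0  # COCO RLE counting starts with zeros
    run = 0
    for v in flat:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev = v
            run = 1
    counts.append(run)
    return {"size": list(mask.shape), "counts": counts}

# Hypothetical 3x3 mask for a single detected instance.
mask = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]], dtype=np.uint8)

result = {
    "image_id": "example_image_key",   # filename without extension (hypothetical)
    "category_id": 1,                  # 1-based, e.g. class Bird
    "segmentation": encode_rle(mask),
    "score": 0.87,                     # hypothetical confidence
}
```

All such entries for all images are collected into one list and written out as a single JSON file.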

All detection results should be submitted as a single, zipped JSON file uploaded to this benchmark server. Additional information is available in the COCO upload and result formats for detection, respectively. The main performance metric is Average Precision (AP), computed on the basis of instance-level segmentations per object category and averaged over a range of overlaps 0.5:0.05:0.95 with inclusive start and end. A maximum of 256 object detections are considered per image.
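The matching criterion behind AP can be sketched as follows: a predicted mask is compared against a ground-truth mask via intersection-over-union, and AP is averaged over the ten thresholds 0.50, 0.55, ..., 0.95. This is only an illustration of the overlap range and IoU computation, not the official evaluation code.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

# The ten IoU thresholds 0.5:0.05:0.95 (inclusive start and end).
thresholds = np.arange(0.5, 1.0, 0.05).round(2)

# Hypothetical prediction and ground truth, shifted by one column.
pred = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=np.uint8)
gt = np.array([[0, 1, 1],
               [0, 1, 1],
               [0, 0, 0]], dtype=np.uint8)

iou = mask_iou(pred, gt)  # intersection 2, union 6 -> 1/3
```

At each threshold t, the prediction counts as a true positive only if iou >= t; per-category AP is then averaged over the ten thresholds and over categories.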


validation

Start: June 30, 2020, midnight

Description: Development phase with validation data

test

Start: June 30, 2020, midnight

Description: Competition phase with test data

Competition Ends

Never

#  Username  Score
1  zhouxy    0.1297