Mapillary Vistas Panoptic Segmentation Task Robust Vision @ ECCV 2020

Organized by Mapillary


Robust Vision Workshop @ ECCV 2020: Panoptic Segmentation Challenge

We are pleased to contribute the Mapillary Vistas dataset to the Robust Vision Workshop @ ECCV 2020: Panoptic Segmentation Task. Panoptic segmentation addresses both stuff and thing classes, unifying the typically distinct semantic and instance segmentation tasks. Things are countable objects such as people, animals, and tools; stuff classes are amorphous regions of similar texture or material such as grass, sky, and road. 'Panoptic' means "including everything visible in one view", and in our context it refers to a unified, global view of segmentation: the task assigns a semantic label and an instance id to each pixel of an image, producing dense, coherent scene segmentations. Such rich and complete segmentations are an important step toward real-world vision systems such as autonomous driving or augmented reality. For more details about the panoptic task, including evaluation metrics, please see the panoptic segmentation paper.

This CodaLab evaluation server provides a platform to measure performance on the validation and test sets. A slightly modified variant of the COCO Panoptic API is used to compute the main ranking metric, Panoptic Quality (PQ).

The submission format is similar to the one described for the COCO dataset; however, due to differences in file-naming conventions, we adopted the following format:

Each per-image annotation consists of two parts: (1) a PNG that stores the class-agnostic image segmentation, and (2) a JSON struct that stores the semantic information for each image segment. The id of each object in the PNG file links it to its entry in the JSON struct.
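If the PNGs follow the COCO panoptic convention, each segment id is packed into a pixel's RGB channels as id = R + 256·G + 256²·B. A minimal sketch of that packing, stated here as an assumption to verify against the ground-truth files shipped with the dataset:

```python
# Sketch of the COCO panoptic id encoding, where a segment id is packed
# into a pixel's RGB channels as id = R + 256*G + 256**2 * B. Whether the
# Mapillary PNGs use exactly this packing should be verified against the
# ground-truth files in the dataset download.
def id_to_rgb(segment_id):
    """Split an integer segment id into an (R, G, B) triple."""
    return (segment_id % 256, (segment_id // 256) % 256, segment_id // 256 ** 2)

def rgb_to_id(r, g, b):
    """Recover the integer segment id from an (R, G, B) triple."""
    return r + 256 * g + 256 ** 2 * b
```

Decoding a pixel with `rgb_to_id` then yields the id that links the segment to its `segments_info` entry.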

The struct of the panoptic results is shown next. Please note that image_id is a string rather than an integer as in COCO. 

annotation{
    "image_id" : str,
    "file_name" : str,
    "segments_info" : [segment_info],
}

segment_info{
    "id" : int,
    "category_id" : int,
}
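For illustration, a complete per-image annotation in this format might look as follows; the image id, file name, and segment values here are hypothetical, not taken from the dataset:

```python
import json

# Hypothetical per-image annotation following the struct above; all
# concrete values are made up for illustration only.
annotation = {
    "image_id": "xyz123",
    "file_name": "xyz123.png",
    "segments_info": [
        {"id": 1, "category_id": 1},   # first segment, first config class
        {"id": 2, "category_id": 13},  # second segment, another class
    ],
}
print(json.dumps(annotation, indent=4))
```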

The category_id is a 1-based integer mapping to the respective class label positions in the config.json file, found in the original dataset zip file. For example, an object segmented as class Bird corresponds to the first entry in the config file, so its segments should receive category_id: 1 (rather than 0).
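The 1-based mapping can be sketched as below; the `{"labels": [{"name": ...}]}` layout is an assumption for this example and should be checked against the actual config.json:

```python
# Sketch of deriving 1-based category ids from the class-label list in
# config.json. The "labels"/"name" layout is an assumption for this
# example; adapt the keys to the actual config file.
def build_category_ids(config):
    """Map each label name to its 1-based position in the config."""
    return {label["name"]: idx + 1
            for idx, label in enumerate(config["labels"])}

toy_config = {"labels": [{"name": "Bird"}, {"name": "Ground Animal"}]}
print(build_category_ids(toy_config))  # {'Bird': 1, 'Ground Animal': 2}
```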

Panoptic submissions must contain exactly one JSON file encoding all annotations and one folder of PNGs, where file names and segment indices match the values and ids in the annotations, respectively. The PNGs should be located in the folder <zip_root>/name/*, where <zip_root>/name.json is the corresponding JSON file. For more details, please see the ground-truth format for panoptic segmentation provided in the original dataset download. Finally, all files have to be zipped into a single file with the folder structure and content described above.
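The packaging step can be sketched as follows. The submission name "predictions" is a placeholder, and the top-level `{"annotations": [...]}` wrapper in the JSON is an assumption carried over from the COCO panoptic convention; verify both against the ground-truth format in the dataset download.

```python
import io
import json
import zipfile

# Sketch of packaging a panoptic submission: one name.json at the zip
# root plus a folder name/ of PNGs. The {"annotations": [...]} wrapper
# is assumed from the COCO panoptic convention.
def pack_submission(buffer, name, annotations, pngs):
    """annotations: list of per-image dicts; pngs: {file_name: png bytes}."""
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name + ".json", json.dumps({"annotations": annotations}))
        for file_name, data in pngs.items():
            zf.writestr(name + "/" + file_name, data)

buf = io.BytesIO()
pack_submission(buf, "predictions",
                [{"image_id": "abc", "file_name": "abc.png",
                  "segments_info": [{"id": 1, "category_id": 1}]}],
                {"abc.png": b"\x89PNG placeholder"})
names = zipfile.ZipFile(buf).namelist()
```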

A slightly modified variant of the COCO Panoptic API is used to evaluate results of the Panoptic Segmentation Challenge. The modification affects the maximum number of object detections allowed per image, which is increased to 256.
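For intuition, PQ is the sum of IoUs over matched (true-positive) segment pairs divided by |TP| + ½|FP| + ½|FN|, where a match counts as a true positive only when its IoU exceeds 0.5. A simplified single-class sketch, assuming the pairwise IoUs have already been computed:

```python
# Simplified sketch of Panoptic Quality for one class, assuming matched
# (prediction, ground-truth) pairs and their IoUs are precomputed; a
# match counts as a true positive only when IoU > 0.5.
def panoptic_quality(matched_ious, num_pred, num_gt):
    """matched_ious: IoUs of matched pairs; unmatched predictions are
    false positives, unmatched ground-truth segments false negatives."""
    tp = [iou for iou in matched_ious if iou > 0.5]
    fp = num_pred - len(tp)
    fn = num_gt - len(tp)
    denom = len(tp) + 0.5 * fp + 0.5 * fn
    return sum(tp) / denom if denom else 0.0
```

The actual evaluation additionally averages PQ over classes and handles ignore regions; see the COCO Panoptic API for the full logic.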

validation

Start: June 30, 2020, midnight

Description: Test phase with validation data

test

Start: June 30, 2020, midnight

Description: Competition phase with test data

Competition Ends

Never

#   Username   Score
1   zendelo    0.3408