Mapillary Panoptic Segmentation Challenge

Organized by Mapillary


Mapillary Vistas 2018 Panoptic Segmentation Challenge


We are pleased to introduce the Mapillary Vistas Panoptic Segmentation Task, with the goal of advancing the state of the art in scene segmentation. Panoptic segmentation addresses both stuff and thing classes, unifying the typically distinct semantic and instance segmentation tasks. The aim is to generate coherent scene segmentations that are rich and complete, an important step toward real-world vision systems such as autonomous driving or augmented reality. For full details of the panoptic segmentation task, please see the panoptic evaluation pages.

In a bit more detail: things are countable objects such as people, animals, and tools. Stuff classes are amorphous regions of similar texture or material, such as grass, sky, and road. Previous Mapillary (and COCO) tasks addressed stuff and thing classes separately. To encourage the study of stuff and things in a unified framework, we introduce the Mapillary Vistas Panoptic Segmentation Task. The definition of 'panoptic' is "including everything visible in one view"; in our context, it refers to a unified, global view of segmentation. The panoptic segmentation task involves assigning a semantic label and an instance id to each pixel of an image, which requires generating dense, coherent scene segmentations. For more details about the panoptic task, including evaluation metrics, please see the panoptic segmentation paper.

The panoptic segmentation task is part of the Joint COCO and Mapillary Recognition Challenge Workshop at ECCV 2018. For further details about the joint workshop, please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary Panoptic Segmentation Tasks (the tasks share almost identical data formats and the exact same evaluation metrics). The only difference between the COCO and Mapillary submission formats is the value type of image_id in an annotation: it must be a string containing the file name without extension, rather than an integer as in COCO. Please check for compliance with the provided ground truth and/or make a trial submission using validation data. The annotation format is:

annotation{
    "image_id": str,
    "file_name": str,
    "segments_info": [segment_info],
}

segment_info{
    "id": int,
    "category_id": int,
}
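
To make the format concrete, here is a minimal Python sketch that writes one submission entry. The image name and category ids are hypothetical, and it assumes the id2rgb helper from the COCO panoptic API repository (panopticapi) for encoding segment ids into the submission PNG.

import json

import numpy as np
from PIL import Image
from panopticapi.utils import id2rgb  # COCO panoptic API helper

# Hypothetical dense prediction: one segment id per pixel (0 = void).
pred = np.zeros((1024, 2048), dtype=np.uint32)
pred[:512, :] = 1   # e.g. a "sky" stuff segment
pred[512:, :] = 2   # e.g. a "road" stuff segment

# Unlike COCO, image_id is a string: the file name without extension.
image_id = "example_vistas_image"  # hypothetical image name

annotation = {
    "image_id": image_id,
    "file_name": image_id + ".png",
    "segments_info": [
        {"id": 1, "category_id": 27},  # hypothetical category ids
        {"id": 2, "category_id": 13},
    ],
}

# Segment ids are stored in the PNG channels (id = R + 256*G + 256**2*B).
Image.fromarray(id2rgb(pred)).save(annotation["file_name"])

with open("predictions.json", "w") as f:
    json.dump({"annotations": [annotation]}, f)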

The panoptic task uses all publicly available Mapillary Vistas Research edition v1.1 images (18,000 train + 2,000 val) with the 37 thing categories also used in the detection task, plus 28 stuff categories and 1 void class. Annotations of different objects do not overlap, whether the objects belong to the same class or to different classes. The Panoptic Quality (PQ) metric is used for performance evaluation (the same as for COCO); for details, see the panoptic evaluation page.
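
For reference, PQ matches predicted and ground-truth segments of the same class at IoU > 0.5 (which makes the matching unique) and is defined as the sum of matched IoUs divided by |TP| + ½|FP| + ½|FN|. The following sketch shows the per-class computation, assuming the matching has already been done; it also exposes the segmentation-quality (SQ) and recognition-quality (RQ) decomposition from the panoptic segmentation paper.

def panoptic_quality(matched_ious, num_fp, num_fn):
    """Per-class PQ from already-matched segments.

    matched_ious: IoU of each true-positive (prediction, ground-truth) pair;
                  matches require IoU > 0.5, so the matching is unique.
    num_fp: number of unmatched predicted segments.
    num_fn: number of unmatched ground-truth segments.
    """
    tp = len(matched_ious)
    if tp + num_fp + num_fn == 0:
        return 0.0  # class absent everywhere; excluded from averaging in practice
    sq = sum(matched_ious) / tp if tp else 0.0    # segmentation quality
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)  # recognition quality
    return sq * rq                                # PQ = SQ * RQ

# Two matches (IoU 0.9 and 0.6) plus one false positive:
# PQ = (0.9 + 0.6) / (2 + 0.5) = 0.6
print(panoptic_quality([0.9, 0.6], num_fp=1, num_fn=0))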

This CodaLab evaluation server provides a platform to measure performance on the val and test sets. The COCO Panoptic API is provided for computing the performance metrics used to evaluate panoptic segmentation.

To participate, you can find instructions on the Mapillary Vistas ECCV workshop website. Please also see the corresponding COCO pages, e.g. the overview, challenge description, download, format, guidelines, evaluate, and leaderboard pages, for more details.

The COCO Panoptic API is used to evaluate results of the Panoptic Segmentation Challenge. More details can be found on the COCO challenge homepage.
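
As a rough illustration, a local evaluation against the validation ground truth might look as follows with the panopticapi package; the file and folder names are hypothetical, and pq_compute is the evaluation entry point in the COCO panoptic API repository.

from panopticapi.evaluation import pq_compute

# Hypothetical paths: ground-truth JSON/PNGs and the submission from above.
results = pq_compute(
    gt_json_file="panoptic_val.json",
    pred_json_file="predictions.json",
    gt_folder="panoptic_val",
    pred_folder="predictions",
)
# results holds PQ/SQ/RQ aggregated over "All", "Things", and "Stuff".
print(results["All"]["pq"], results["Things"]["pq"], results["Stuff"]["pq"])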

validation
Start: June 30, 2018, midnight UTC
Description: Test phase with validation data

test
Start: June 30, 2018, midnight UTC
Description: Competition phase with test data

Competition Ends: Never
