We are pleased to introduce the Mapillary Vistas Panoptic Segmentation Task, with the goal of advancing the state of the art in scene segmentation. Panoptic segmentation addresses both stuff and thing classes, unifying the typically distinct semantic and instance segmentation tasks. The aim is to generate coherent scene segmentations that are rich and complete, an important step toward real-world vision systems such as those used in autonomous driving or augmented reality. For full details of the panoptic segmentation task, please see the panoptic evaluation pages here and here.
In a bit more detail: things are countable objects such as people, animals, or tools, while stuff classes are amorphous regions of similar texture or material such as grass, sky, or road. Previous Mapillary (and COCO) tasks addressed stuff and thing classes separately. To encourage the study of stuff and things in a unified framework, we introduce the Mapillary Vistas Panoptic Segmentation Task. The definition of 'panoptic' is "including everything visible in one view"; in our context, panoptic refers to a unified, global view of segmentation. The panoptic segmentation task involves assigning a semantic label and an instance id to each pixel of an image, which requires generating dense, coherent scene segmentations. For more details about the panoptic task, including evaluation metrics, please see the panoptic segmentation paper.
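In the COCO-style panoptic format, the per-pixel semantic label and instance id are combined into a single segment id, which is stored in the color channels of a PNG. A minimal sketch of that encoding, assuming the standard id = R + 256·G + 256²·B convention used by the panoptic API:

```python
def id_to_rgb(segment_id):
    """Encode a panoptic segment id into an (R, G, B) triple,
    following the id = R + 256*G + 256**2*B convention."""
    return (segment_id % 256,
            (segment_id // 256) % 256,
            (segment_id // 256 ** 2) % 256)

def rgb_to_id(r, g, b):
    """Invert the encoding: recover the segment id from a pixel color."""
    return r + 256 * g + 256 ** 2 * b

# Round-trip check for an arbitrary segment id.
rgb = id_to_rgb(1234567)
print(rgb)                      # (135, 214, 18)
print(rgb_to_id(*rgb))          # 1234567
```

Each segment id then maps to an entry in the annotation's segments_info list, which carries the semantic category for that segment.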
The panoptic segmentation task is part of the 2nd Joint COCO and Mapillary Recognition Challenge Workshop, this time at ICCV 2019. For further details about the joint workshop, please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary Panoptic Segmentation Tasks (the tasks share nearly identical data formats and exactly the same evaluation metrics). The only difference between the COCO and Mapillary submission formats is the value type of image_id in an annotation: it must be a string giving the file name without extension, rather than an integer as in COCO. Please check for compliance with the provided ground truth and/or make a trial submission using validation data.
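For illustration, the image_id difference might look like this in the submission JSON. This is a sketch only: the file names, segment ids, and category ids below are made up, and only the fields relevant to the comparison are shown.

```python
import json

# COCO-style panoptic annotation: image_id is an integer.
coco_annotation = {
    "image_id": 139,                       # integer id, as in COCO
    "file_name": "000000000139.png",
    "segments_info": [{"id": 3226956, "category_id": 1}],
}

# Mapillary Vistas submission: image_id is the image file name
# without its extension, as a string (hypothetical name below).
mapillary_annotation = {
    "image_id": "xyPOYBd8pjrCxxFUrQnqSw",
    "file_name": "xyPOYBd8pjrCxxFUrQnqSw.png",
    "segments_info": [{"id": 3226956, "category_id": 1}],
}

print(json.dumps(mapillary_annotation, indent=2))
```

Everything else in the annotation structure stays the same between the two tasks, which is what makes joint participation straightforward.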
The panoptic task uses all the publicly available Mapillary Vistas Research edition v1.1 images (18,000 train + 2,000 val), labeled with the 37 thing categories also used in the detection task, plus 28 stuff categories and 1 void class. Annotations of different objects do not overlap, whether the objects belong to the same class or to different classes. The Panoptic Quality (PQ) metric is used for performance evaluation (the same as for COCO); for details, see the panoptic evaluation page.
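PQ combines segmentation and recognition quality in a single number: the sum of IoUs over matched (true-positive) segment pairs, divided by |TP| + ½|FP| + ½|FN|, where a predicted and a ground-truth segment match when their IoU exceeds 0.5. A minimal sketch of the computation from already-matched segments (not the official implementation; the COCO Panoptic API performs the matching and handles per-class averaging):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute PQ from the IoUs of matched (TP) segment pairs and the
    counts of unmatched predicted (FP) and ground-truth (FN) segments.

    PQ factors as SQ * RQ: SQ is the mean IoU over true positives,
    RQ is an F1-style recognition score.
    """
    tp = len(matched_ious)
    if tp + num_fp + num_fn == 0:
        return 0.0
    return sum(matched_ious) / (tp + 0.5 * num_fp + 0.5 * num_fn)

# Two matched segments with IoUs 0.8 and 0.6, one false positive,
# one false negative: PQ = 1.4 / (2 + 0.5 + 0.5) = 0.4667 (rounded).
print(round(panoptic_quality([0.8, 0.6], 1, 1), 4))  # 0.4667
```

In the actual evaluation, PQ is computed per class and then averaged over classes, so stuff and thing categories contribute on equal footing.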
This CodaLab evaluation server provides a platform to measure performance on the validation and test sets. The COCO Panoptic API is provided to compute the performance metrics for panoptic segmentation.
To participate, please find instructions on the Mapillary Vistas ICCV workshop website. Please also see the corresponding COCO pages, e.g. the overview, challenge description, download, format, guidelines, evaluate, and leaderboard pages, for more details.
Start: Sept. 9, 2019, midnight
Description: Test phase with validation data
Start: Sept. 9, 2019, midnight
Description: Competition phase with test data
Oct. 12, 2019, 9:29 a.m.