Setup

The following guide shows how to download the dataset and weights into a prepared environment in order to train or evaluate deep learning models using the GOOSE Datasets.

Workspace Preparation

Create a folder for the GOOSE Dataset:

mkdir goose-dataset && cd goose-dataset
Unzip
# Run these commands from inside the goose-dataset folder, with the downloaded
# zip files placed there (see 'Download Dataset' below)

# Step 1: Unzip all three zip files
unzip gooseEx_2d_test.zip -d gooseEx_2d_test
unzip gooseEx_2d_train.zip -d gooseEx_2d_train
unzip gooseEx_2d_val.zip -d gooseEx_2d_val

# Step 2: Create the target directory structure
mkdir -p images/{test,train,val} labels/{train,val}

# Step 3: Move the files from each unzipped folder into the final structure
mv gooseEx_2d_test/{CHANGELOG,goose_label_mapping.csv,LICENSE} .
mv gooseEx_2d_test/images/test/* images/test/
mv gooseEx_2d_train/images/train/* images/train/
mv gooseEx_2d_train/labels/train/* labels/train/
mv gooseEx_2d_val/images/val/* images/val/
mv gooseEx_2d_val/labels/val/* labels/val/

After running these commands, the files should match the following structure:

goose-dataset
├── CHANGELOG
├── goose_label_mapping.csv
├── LICENSE
├── images
│   ├── test
│   ├── train
│   └── val
└── labels
    ├── train
    └── val
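To confirm that everything landed in the right place, a quick sanity check with standard shell tools is enough; the snippet below is only a sketch and assumes you are still inside the goose-dataset folder:

# Show the directory layout two levels deep and count the training files
find . -maxdepth 2 -type d | sort
echo "train images: $(ls images/train | wc -l), train labels: $(ls labels/train | wc -l)"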

Download Dataset

Read our page on the Dataset Structure to understand how the annotated datasets are organized.

License

The GOOSE Dataset is published under the CC BY-SA 4.0 license.

You can directly download the preconfigured and zipped raw image, raw point cloud, and ground truth files:

GOOSE

GOOSE 2D Images

GOOSE 3D Point Clouds

GOOSE-Ex

GOOSE-Ex 2D Images

GOOSE-Ex 3D Point Clouds
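The links above point to the zipped archives for each split and modality. As a rough sketch (the URL below is a placeholder, not the real download link; copy the actual link from the list above), downloading from the command line could look like this:

# Download one of the archives into the goose-dataset folder
# (https://example.com/... is a placeholder; use the link copied from the list above)
cd goose-dataset
wget https://example.com/gooseEx_2d_train.zip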

Download Pretrained Weights

We provide pretrained weights for some network architectures for both 2D and 3D semantic segmentation.

Create a models folder inside the dataset root and download your preferred model.
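A minimal sketch of this step, assuming the weights are shipped as zip archives and using a placeholder URL instead of the actual [Download] link from the tables below:

# Create the models folder inside the dataset root
mkdir -p goose-dataset/models

# Download and unpack your preferred model
# (https://example.com/... is a placeholder; use the [Download] link from the tables below)
wget -P goose-dataset/models https://example.com/ddrnet_category_512.zip
unzip goose-dataset/models/ddrnet_category_512.zip -d goose-dataset/models/ddrnet_category_512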

2D Image Segmentation

We mainly evaluate our dataset on the PP-LiteSeg and DDRNet networks, which both offer a good tradeoff between real-time capability and segmentation quality.

  • PP-LiteSeg [1] uses an encoder-decoder structure with a lightweight attention-based fusion module.
  • DDRNet [2] uses a typical two-stream architecture which fuses both branches at different depths within the network.
Model      | Model Name [Download]  | Dataset  | Resolution | # Classes | mIoU / %
PP-LiteSeg | ppliteseg_category_512 | GOOSE-2D | 512x512    | 12        | 67.21
PP-LiteSeg | ppliteseg_class_512    | GOOSE-2D | 512x512    | 64        | 45.09
DDRNet     | ddrnet_category_512    | GOOSE-2D | 512x512    | 12        | 70.23
DDRNet     | ddrnet_class_512       | GOOSE-2D | 512x512    | 64        | 46.53

3D Point Cloud Segmentation

We evaluate our dataset on the PTv3 (Point Transformer V3) model. Weights and example code will be published soon.

References

  1. Peng et al., "PP-LiteSeg: A Superior Real-Time Semantic Segmentation Model," arXiv:2204.02681, 2022. https://arxiv.org/abs/2204.02681
  2. Pan et al., "Deep Dual-Resolution Networks for Real-Time and Accurate Semantic Segmentation of Traffic Scenes," IEEE Trans. Intell. Transp. Syst., 2022.