Setup
The following guide shows how to download the dataset and pretrained weights into a prepared environment in order to train or evaluate deep learning models on the GOOSE Datasets.
Workspace Preparation
Create a folder for the GOOSE Dataset; the following commands are run from the workspace directory that contains it:
mkdir goose-dataset
Unzip
# Step 1: Unzip the three downloaded zip files (GOOSE-Ex 2D splits in this example)
unzip gooseEx_2d_test.zip -d gooseEx_2d_test
unzip gooseEx_2d_train.zip -d gooseEx_2d_train
unzip gooseEx_2d_val.zip -d gooseEx_2d_val
# Step 2: Create the target directory structure
mkdir -p goose-dataset/images/{test,train,val} goose-dataset/labels/{train,val}
# Step 3: Move files from each unzipped folder to the final structure
mv gooseEx_2d_test/{CHANGELOG,goose_label_mapping.csv,LICENSE} goose-dataset/
mv gooseEx_2d_test/images/test/* goose-dataset/images/test/
mv gooseEx_2d_train/images/train/* goose-dataset/images/train/
mv gooseEx_2d_train/labels/train/* goose-dataset/labels/train/
mv gooseEx_2d_val/images/val/* goose-dataset/images/val/
mv gooseEx_2d_val/labels/val/* goose-dataset/labels/val/
After unzipping and moving the files, the dataset folder should match the following structure:
goose-dataset
├── CHANGELOG
├── goose_label_mapping.csv
├── LICENSE
├── images
│   ├── test
│   ├── train
│   └── val
└── labels
    ├── train
    └── val
Download Dataset
Read our page on the Dataset Structure to understand how the annotated datasets are organized.
License
The GOOSE Dataset is published under the CC BY-SA 4.0 license.
You can directly download the preconfigured and zipped raw image, raw point cloud, and ground truth files:
GOOSE
GOOSE 2D Images
- training split (22.5 GB)
- validation split (2.9 GB)
- test split (3.4 GB, raw images only)
GOOSE 3D Point Clouds
- training split (27 GB)
- validation split (3.3 GB)
- test split (3.3 GB, xyzi points only)
GOOSE-Ex
GOOSE-Ex 2D Images
- training split (15 GB)
- validation split (1.4 GB)
- test split (2.3 GB)
GOOSE-Ex 3D Point Clouds
- training split (8.9 GB)
- validation split (0.8 GB)
- test split (1.6 GB)
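As a sketch, each archive can also be fetched and unpacked from the command line. Here <DOWNLOAD_URL> is a placeholder for the actual link of the split you choose, and the file name matches the GOOSE-Ex 2D validation archive used in the unzip example above.
# Download and unpack one split (replace <DOWNLOAD_URL> with the real link)
wget -O gooseEx_2d_val.zip "<DOWNLOAD_URL>"
unzip gooseEx_2d_val.zip -d gooseEx_2d_val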
Download Pretrained Weights
We provide pretrained weights for some network architectures both for 2D and 3D semantic segmentation.
Create a models folder inside the dataset root and download your preferred model.
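A minimal sketch of this step, with <WEIGHTS_URL> as a placeholder for the link of your preferred model:
# Create the models folder inside the dataset root and download a weight archive
mkdir -p goose-dataset/models
wget -P goose-dataset/models "<WEIGHTS_URL>"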
2D Image Segmentation
We mainly evaluate our dataset with the PP-LiteSeg and DDRNet networks, which both offer a good trade-off between real-time capability and segmentation quality.
- PP-LiteSeg [2] uses an encoder-decoder structure with a lightweight attention-based fusion module.
- DDRNet [3] uses a typical two-stream architecture that fuses the two branches at different depths within the network.
Model | Model Name [Download] | Dataset | Resolution | # Classes | mIoU / % |
---|---|---|---|---|---|
PP-LiteSeg | ppliteseg_category_512 | GOOSE-2D | 512x512 | 12 | 67.21 |
PP-LiteSeg | ppliteseg_class_512 | GOOSE-2D | 512x512 | 64 | 45.09 |
DDRNet | ddrnet_category_512 | GOOSE-2D | 512x512 | 12 | 70.23 |
DDRNet | ddrnet_class_512 | GOOSE-2D | 512x512 | 64 | 46.53 |
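The class and category counts in the table correspond to the label definition shipped with the dataset; a quick look at goose_label_mapping.csv shows how the fine-grained classes are grouped into categories (the exact column layout may differ, so check the header first).
# Inspect the label mapping (column layout may vary; check the header first)
head -n 5 goose-dataset/goose_label_mapping.csv
# Count the label entries, assuming one header row
awk -F',' 'END { print NR - 1, "label entries" }' goose-dataset/goose_label_mapping.csv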
3D Point Cloud Segmentation
We evaluate our dataset with the PTv3 model. Weights and example code will be published soon.
References
[2] Peng et al., "PP-LiteSeg: A Superior Real-Time Semantic Segmentation Model", 2022. https://arxiv.org/abs/2204.02681
[3] Pan et al., "Deep Dual-Resolution Networks for Real-Time and Accurate Semantic Segmentation of Traffic Scenes", IEEE Trans. Intell. Transp. Syst., 2022.