Hello,
I'm trying to train on my own MMS dataset.
After a lot of pain I've managed to climb out of the pit, only to find a funny missing piece in trainer.py.
SensatUrban and Toronto3D both use 'segmentation' as their dataset_task, but in trainer.py we have:
```python
def validation(self, net, val_loader, config: Config):

    if config.dataset_task == 'classification':
        self.object_classification_validation(net, val_loader, config)
    elif config.dataset_task == 'segmentation':
        self.object_segmentation_validation(net, val_loader, config)
    elif config.dataset_task == 'cloud_segmentation':
        self.cloud_segmentation_validation(net, val_loader, config)
    elif config.dataset_task == 'slam_segmentation':
        self.slam_segmentation_validation(net, val_loader, config)
    else:
        raise ValueError('No validation method implemented for this network type')
```

But... the `object_segmentation_validation` method is missing! 😄
I'm wondering what the intended behavior was here.
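As a stopgap, I'm considering just routing `'segmentation'` to the cloud segmentation path. This is only my guess at the intent, not a fix from the maintainers:

```python
def validation(self, net, val_loader, config: Config):

    if config.dataset_task == 'classification':
        self.object_classification_validation(net, val_loader, config)
    elif config.dataset_task in ('segmentation', 'cloud_segmentation'):
        # object_segmentation_validation does not exist, so (assumption!)
        # fall back to the cloud segmentation validation for both tasks
        self.cloud_segmentation_validation(net, val_loader, config)
    elif config.dataset_task == 'slam_segmentation':
        self.slam_segmentation_validation(net, val_loader, config)
    else:
        raise ValueError('No validation method implemented for this network type')
```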
In my case, the MMS dataset originally comes as large tiles (100+ million points each), which I chunk into smaller subtiles using a grid and a max point cap. So what goes into training are individual chunks, with batch size 1.
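For context, the chunking is roughly like this (a simplified sketch; `grid_size` and `max_points` are placeholder values, not my actual settings):

```python
import numpy as np

def chunk_tile(points, labels, grid_size=50.0, max_points=500_000):
    """Split one large tile into grid cells, capping each cell's point count.

    points: (N, 3) float array, labels: (N,) int array.
    """
    # Assign each point to a 2D grid cell from its x/y coordinates
    cells = np.floor(points[:, :2] / grid_size).astype(np.int64)
    # Map each point to an integer id for its cell
    _, cell_ids = np.unique(cells, axis=0, return_inverse=True)

    chunks = []
    for cell in np.unique(cell_ids):
        idx = np.flatnonzero(cell_ids == cell)
        # Enforce the max point cap by random subsampling
        if idx.size > max_points:
            idx = np.random.choice(idx, max_points, replace=False)
        chunks.append((points[idx], labels[idx]))
    return chunks
```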
For validation, my idea is to randomly select some of these chunks, run inference on them, and compare the predictions against their ground-truth labels. Would that be a reasonable approach to estimating validation loss in this context?
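Something along these lines is what I have in mind (a sketch, not code from the repo: `net(points)` is a stand-in for the actual forward pass and preprocessing, assumed to return integer class ids per point; I score mean IoU rather than the loss itself):

```python
import numpy as np

def validate_on_chunks(net, chunks, num_classes, num_samples=20):
    """Run inference on a random subset of chunks and score against labels."""
    picked = np.random.choice(len(chunks), size=min(num_samples, len(chunks)),
                              replace=False)
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for i in picked:
        points, labels = chunks[i]
        preds = net(points)  # hypothetical forward pass -> (N,) int class ids
        # Accumulate a confusion matrix over all sampled chunks
        conf += np.bincount(num_classes * labels.astype(np.int64) + preds,
                            minlength=num_classes ** 2
                            ).reshape(num_classes, num_classes)
    # Per-class IoU = TP / (TP + FP + FN), guarding against empty classes
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    return iou.mean()
```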
Thanks!