Hello,
I am currently working with nn-interactive and noticed that the default pipeline performs inference at the original 3D image resolution.
I attempted to modify the workflow by adding a preprocessing step to resample the input images to an isotropic spacing of (1, 1, 1) mm before feeding them into the model. However, after applying this resampling, the resulting segmentation masks are completely empty (all background).
Questions
Could you please provide some insight into the following:
- Model sensitivity: Is the pre-trained model strictly bound to the resolution/spacing distribution of its training data? Does resampling to (1, 1, 1) mm cause a domain shift large enough to make the model fail?
- Implementation: Or is this more likely an issue with how the resampled data is handled (e.g., coordinate alignment or tensor-shape mismatches) within the inference pipeline?
Any advice on whether resampling is recommended or supported for this model would be greatly appreciated.
Thank you!