I wanted to ask if anyone has worked with the CAMUS dataset in PyTorch, because there are a lot of things I can't understand.
First, when I print the sizes of the images, I get something like this:
(1, 843, 512)
(1, 1232, 748)
(1, 779, 472)
(1, 1232, 748)
(1, 779, 472)
(1, 1232, 748)
(20, 843, 512)
(19, 1232, 748)
(24, 779, 472)
(20, 1232, 748)

where the last four are from the sequence.mhd files. Why is the first dimension in those so large, and why does it differ between files? Also, when I try to use a U-Net model or train it, I usually run into dimension errors (I have changed my code so many times that it is not just one thing). Could these different dimensions be the problem? And if so, how can I solve it?
I don't know how to resize the images so that they all have the same shape; that way I could feed them into the U-Net and then train it, because so far I always end up with a dimension error.
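For example, something like this is roughly what I'm trying to do to get everything to one spatial size (just a sketch: the target size 256×256 is a placeholder I picked, and I'm treating the first dimension as channels/frames when calling `torch.nn.functional.interpolate`):

```python
import numpy as np
import torch
import torch.nn.functional as F

def resize_to(img: np.ndarray, size=(256, 256)) -> torch.Tensor:
    # img has shape (C, H, W): C is 1 for a single frame, or the
    # number of frames for a *_sequence.mhd file.
    t = torch.from_numpy(img.astype(np.float32)).unsqueeze(0)  # (1, C, H, W)
    t = F.interpolate(t, size=size, mode="bilinear", align_corners=False)
    return t.squeeze(0)  # (C, size[0], size[1])

# Dummy arrays with the same shapes I printed above:
frame = np.zeros((1, 843, 512))   # single-frame image
seq = np.zeros((20, 1232, 748))   # sequence file
print(resize_to(frame).shape)  # torch.Size([1, 256, 256])
print(resize_to(seq).shape)    # torch.Size([20, 256, 256])
```

Is this a reasonable way to do it, or should the resizing happen somewhere else (e.g. in the Dataset's `__getitem__`)?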