IMHO the short answer is: you can't, and you shouldn't even try.
Clinical data is like that. Even for the same anatomical region (say, the pelvis), each scan will have a varying number of slices, depending on the clinical protocol, the organization's standards, slice thickness, technician decisions, patient symptoms, and so on.
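You can see this variability for yourself in a few lines. A minimal sketch, assuming pydicom is installed and each study sits in its own subdirectory of a hypothetical `data/pelvis_ct` folder (both the layout and the path are illustrative):

```python
from pathlib import Path
import pydicom

data_root = Path("data/pelvis_ct")  # hypothetical location of your studies

for study_dir in sorted(data_root.iterdir()):
    if not study_dir.is_dir():
        continue
    # Read headers only; we just want slice counts and thickness, not pixels.
    slices = [
        pydicom.dcmread(f, stop_before_pixels=True)
        for f in study_dir.glob("*.dcm")
    ]
    if not slices:
        continue
    thickness = getattr(slices[0], "SliceThickness", "unknown")
    print(f"{study_dir.name}: {len(slices)} slices, thickness={thickness} mm")
```

Run that over any decent-sized archive and you will not find two hospitals, or even two scanners in the same hospital, producing identical slice counts for the same region.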
If you try to train an algorithm based on a fixed number of slices, you are guaranteed to develop something that may work on your training/test data but will fail in real clinical use.
I would suggest you Google why AI algorithms fail so often in clinical use. Algorithms developed without a) broad clinical understanding, b) technical understanding of the data, c) extensive and varied training data, and d) understanding of clinical workflows will almost always fail.
You could, in theory, try to normalize the data's dimensions based on the anatomy you're looking at, but then you need to correctly identify that anatomy, which is itself a hard problem. And even then, every patient has different dimensions and anatomical shape.
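If you do go down the normalization road, the sane version is resampling to a fixed *physical* voxel size rather than a fixed slice count. A minimal sketch with NumPy/SciPy (the spacing values are illustrative; real ones come from the DICOM headers):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume: np.ndarray,
                          spacing_zyx: tuple[float, float, float],
                          target_mm: float = 1.0) -> np.ndarray:
    """Resample a (z, y, x) volume so every voxel is target_mm cubed.

    Note this normalizes physical dimensions, not anatomy: two pelvises
    resampled to 1 mm voxels will still differ in shape and slice count.
    """
    factors = tuple(s / target_mm for s in spacing_zyx)
    return zoom(volume, factors, order=1)  # linear interpolation

# Example: a 40-slice scan at 3 mm thickness becomes ~120 slices at 1 mm.
scan = np.random.rand(40, 512, 512).astype(np.float32)  # stand-in for real data
resampled = resample_to_isotropic(scan, spacing_zyx=(3.0, 0.8, 0.8))
print(scan.shape, "->", resampled.shape)  # (40, 512, 512) -> (120, 410, 410)
```

Notice the output still has a variable number of slices; you've only made the voxels comparable, not the patients.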
You need to train with real data, the way it is, and with huge training sets that cover all the technical, clinical, and acquisition variability, to ensure you don't end up with something that only works 'in the lab' but fails completely when it hits the real world.
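And to be clear, variable slice counts are not even a blocker at the modeling level. A minimal PyTorch sketch (the architecture, channel counts, and pooled size are illustrative assumptions, not a recommendation) of a network that accepts any number of slices:

```python
import torch
import torch.nn as nn

class VariableDepthClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Adaptive pooling collapses whatever (depth, height, width) arrives
        # into a fixed-size feature map, so the slice count never enters
        # the architecture at all.
        self.pool = nn.AdaptiveAvgPool3d((4, 8, 8))
        self.head = nn.Linear(32 * 4 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x))
        return self.head(x.flatten(1))

model = VariableDepthClassifier()
for depth in (40, 97, 213):  # three scans with different slice counts
    scan = torch.randn(1, 1, depth, 64, 64)
    print(depth, "->", model(scan).shape)  # torch.Size([1, 2]) every time
```

The hard part was never the tensor shapes; it's the clinical and acquisition variability your training set has to cover.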