I'm working on image super-resolution with EDSR as a baseline model. Following EDSR, I'm not using any batch-norm layers in my model. A possibly naive question about batch sizes suddenly occurred to me.
Currently, I'm training my model with batch size 32 (as in EDSR). But since I'm not using any batch-normalization technique, I can't see any reason to use a batch size greater than 1. I'm not confident in my reasoning, though, since the authors' implementations do use batch sizes greater than 1.
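To make my question concrete: as I understand it, without batch norm the only place batch size enters is that the mini-batch gradient is the average of per-sample gradients. Here's a toy sketch of that understanding (a hypothetical scalar regression example I made up, not EDSR code):

```python
import random

random.seed(0)

# Toy problem: estimate the gradient of an MSE loss for noisy targets
# y = 2*x + noise. The mini-batch gradient is just the average of the
# per-sample gradients, so batch size only changes how noisy each
# gradient estimate is -- no batch-norm involved anywhere.

def sample_grad(w, batch_size):
    """Average gradient of (w*x - y)^2 over one random mini-batch."""
    total = 0.0
    for _ in range(batch_size):
        x = random.uniform(-1, 1)
        y = 2.0 * x + random.gauss(0, 0.5)   # noisy target
        total += 2 * (w * x - y) * x         # d/dw of the squared error
    return total / batch_size

def grad_variance(w, batch_size, trials=2000):
    """Empirical variance of the mini-batch gradient estimator."""
    grads = [sample_grad(w, batch_size) for _ in range(trials)]
    mean = sum(grads) / trials
    return sum((g - mean) ** 2 for g in grads) / trials

var_b1 = grad_variance(1.0, 1)
var_b32 = grad_variance(1.0, 32)
print(var_b1, var_b32)
```

Running this, the batch-32 gradient estimates have much lower variance than the batch-1 ones, which is the only effect of batch size I can identify in a batch-norm-free model.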
Could someone help me with this? What am I missing?