For the convolutional part of the net, the input size does not really matter: only the shape of the output changes as you change the input size.
However, when it comes to "InnerProduct" layers, the shape of the weights is fixed and is determined by the input size.
You can perform "net surgery", converting your "InnerProduct" layers into "Convolution" layers. This way your net can process inputs of any size; however, your outputs will also vary in shape accordingly.
Another option is to define your net according to a new fixed input size, re-use all the learned weights of the convolutions, and only fine-tune the weights of the fully connected layers.