I am reading a book about deep learning and am currently learning about the Keras functional API. In it, the author writes:
"The input layer takes a shape argument that is a tuple that indicates the dimensionality of the input data. When input data is one-dimensional, such as for a Multilayer Perceptron, the shape must explicitly leave room for the shape of the minibatch size used when splitting the data when training the network. Therefore, the shape tuple is always defined with a hanging last dimension (2,), this is the way you must define a one-dimensional tuple in Python, for example:"
I did not quite understand the shape part. Why is the second element of the tuple left empty, and what does leaving it empty mean? I know that None means a dimension can take any size, but what is happening here?

Also, about the mini-batch size: isn't only one sample processed at a time by the network, and with mini-batches don't we just update the weights (if using SGD) after each batch has been evaluated by the model? Why would we then need to change the dimension of our input shape to accommodate the batch? Shouldn't only one data instance go through at a time?
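To make my confusion concrete, here is roughly what I experimented with (again just a sketch, variable names are mine):

```python
# Keras seems to prepend an extra dimension for the batch
# when I inspect the model's input shape.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

visible = Input(shape=(2,))
output = Dense(1)(visible)
model = Model(inputs=visible, outputs=output)

print(model.input_shape)  # (None, 2) -- where does the None come from?

# Feeding a mini-batch of 8 samples, each with 2 features:
X = np.random.rand(8, 2)
preds = model.predict(X)
print(preds.shape)        # (8, 1)
```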