Backpropagation is an algorithm for training a neural network. During the training process, the network makes a prediction and incurs some sort of "cost" or "loss" based on what the right answer is. We want our network to adjust based on this loss, so we use backpropagation to update the weights of the individual neurons in the network so that it (hopefully) makes a better prediction on the next data point.
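To make that loop concrete, here's a minimal sketch in plain Python. The one-parameter model, the data point, and the learning rate are all made up for illustration; the point is just the predict → loss → adjust cycle.

```python
# Minimal training loop for a one-parameter model: prediction = w * x.
# The model, data, and learning rate here are illustrative, not from a real library.
x, target = 3.0, 6.0   # one training example (the "right answer" needs w = 2.0)
w = 0.0                # start with a bad guess
lr = 0.01              # learning rate: how big each adjustment is

for step in range(50):
    prediction = w * x                    # forward pass: make a prediction
    loss = (prediction - target) ** 2     # the "cost" of that prediction
    grad = 2 * (prediction - target) * x  # dloss/dw: how the loss changes with w
    w -= lr * grad                        # adjust w to do better next time

print(w)  # close to 2.0 after training
```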
It's called backpropagation because the algorithm starts at the end of the network, with the single loss value based on the output, and updates neurons in reverse order, so the neurons at the start of the network are updated last. The algorithm makes heavy use of the chain rule, so you can think of the gradient "propagating" backward through the network.
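Here's a sketch of that backward order on a tiny two-layer network, again with made-up numbers: `h = tanh(w1 * x)` feeds into `y = w2 * h`. Notice that the gradient reaches the last weight `w2` first, then flows through the hidden activation back to `w1` via the chain rule.

```python
import math

# A tiny two-layer network: h = tanh(w1 * x), y = w2 * h, loss = (y - t)^2.
# Backprop computes gradients in reverse: loss -> w2 -> hidden layer -> w1.
x, t = 1.0, 0.5     # input and target (illustrative values)
w1, w2 = 0.3, 0.7   # weights to train
lr = 0.1            # learning rate

for step in range(100):
    # Forward pass, from input to loss.
    h = math.tanh(w1 * x)
    y = w2 * h
    loss = (y - t) ** 2

    # Backward pass: start at the loss and apply the chain rule.
    dL_dy = dL_dy = 2 * (y - t)            # derivative of the loss w.r.t. the output
    dL_dw2 = dL_dy * h                     # last layer's weight gets its gradient first
    dL_dh = dL_dy * w2                     # gradient propagates back to the hidden layer
    dL_dw1 = dL_dh * (1 - h ** 2) * x      # tanh'(z) = 1 - tanh(z)^2, then chain to w1

    # Update both weights against their gradients.
    w2 -= lr * dL_dw2
    w1 -= lr * dL_dw1

print(round(loss, 6))  # near zero: the prediction has moved toward the target
```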
For a (much) better explanation, start with http://karpathy.github.io/neuralnets/ and http://cs231n.github.io/optimization-2/.