I had a similar question; it appears there are two solutions.
One is simply to let the data size round to an even number when dividing by two.
E.g. a classical transform, starting at 8, works on data sizes 8, 4, 2.
With rounding, starting at 13 it goes 14, 8, 4, 2 (each odd size rounded up to the next even number) instead of erroring at 13, 7, etc.
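Here's a minimal sketch of that rounding idea (my own code, not from any particular library; the unnormalized averaging convention and the names `haar_level`/`MAXN` are just my choices). When a level's working size is odd, the last sample is duplicated, so for 13 inputs the sizes visited are 14, 8, 4, 2:

```c
#include <stdio.h>

#define MAXN 64

/* One Haar level: averages go back to the front of buf,
 * details are appended to det. An odd working size is rounded
 * up to even by duplicating the last value. */
static size_t haar_level(double *buf, size_t n, double *det, size_t *ndet)
{
    if (n % 2 == 1) {
        buf[n] = buf[n - 1];   /* round odd size up to even */
        n++;
    }
    size_t half = n / 2;
    for (size_t i = 0; i < half; i++) {
        double a = buf[2 * i], b = buf[2 * i + 1];
        buf[i]         = (a + b) / 2.0;   /* average for next level */
        det[(*ndet)++] = (a - b) / 2.0;   /* detail coefficient     */
    }
    return half;
}

int main(void)
{
    double buf[MAXN] = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0, 3, 8, 5};
    double det[MAXN];
    size_t n = 13, ndet = 0;   /* sizes visited: 14, 8, 4, 2 */

    while (n > 1)
        n = haar_level(buf, n, det, &ndet);

    printf("overall average: %g\ndetails:", buf[0]);
    for (size_t i = 0; i < ndet; i++)
        printf(" %g", det[i]);
    printf("\n");
    return 0;
}
```

Note this is expansive: the 13 inputs come out as 1 average plus 14 details, which is presumably part of the 'extra features' problem discussed below.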
An alternative I've just found is here:
http://people.sc.fsu.edu/~jburkardt/c_src/haar/haar.html
This effectively pads the data out to the next power of two. However, the C example will also overrun the input array when the data is padded like this.
As to which is mathematically correct, I don't know. It seems like the rounding approach would introduce artifacts, whilst the padding would add extra 'features'. Perhaps these could be minimized by padding with either the last value or a mean value?
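For comparison, here's a sketch of the padding route that avoids the overrun by copying into a separate buffer sized to the next power of two before transforming (again my own code; `haar_1d` here is a plain unnormalized in-place Haar, not the routine from the linked page). The tail is padded with the last value; padding with the mean would be a one-line change:

```c
#include <stdio.h>
#include <stdlib.h>

/* Smallest power of two >= n. */
static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* In-place unnormalized Haar transform; m must be a power of two. */
static void haar_1d(double *x, size_t m)
{
    double *tmp = malloc(m * sizeof *tmp);
    for (size_t n = m; n > 1; n /= 2) {
        size_t half = n / 2;
        for (size_t i = 0; i < half; i++) {
            tmp[i]        = (x[2*i] + x[2*i + 1]) / 2.0;  /* averages */
            tmp[half + i] = (x[2*i] - x[2*i + 1]) / 2.0;  /* details  */
        }
        for (size_t i = 0; i < n; i++)
            x[i] = tmp[i];
    }
    free(tmp);
}

int main(void)
{
    double data[13] = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0, 3, 8, 5};
    size_t n = sizeof data / sizeof data[0];
    size_t m = next_pow2(n);                  /* 16 for n = 13 */

    double *buf = malloc(m * sizeof *buf);
    for (size_t i = 0; i < m; i++)            /* pad tail with last value */
        buf[i] = (i < n) ? data[i] : data[n - 1];

    haar_1d(buf, m);

    for (size_t i = 0; i < m; i++)
        printf("%g ", buf[i]);
    printf("\n");
    free(buf);
    return 0;
}
```

One nice property of last-value padding is that the padded region is flat, so pairs that fall entirely within it produce zero detail coefficients; only the pairs straddling the boundary between real data and padding generate spurious ones.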