In statistics there is a result called the No Free Lunch theorem, which essentially says that the performance of any two algorithms is equivalent when averaged over all possible problems. In a way it means: "Dude, the problem space is infinitely huge; you have tried too few problems to infer which is better."
But to be honest, in practical scenarios I have found AdaBoost to perform better than SVM in most cases.
Still, there are some cases where people prefer to use SVM:
1) When the volume of training data is huge and computation time is a concern. That's why SVM still has a say in large-scale settings. Be it boosting, deep belief networks or ANNs, all are computation-heavy compared to SVM.
2) When your production setting needs you to keep things simple, you may choose a simple linear SVM, which is low on both computation time and memory. (However, keep in mind that complex non-linear kernels in SVM can eat up a lot more memory than AdaBoost.)
3) When the dataset is reasonably balanced, i.e. you have sufficient observations in the training data for each of the labels.
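To make point 2 concrete, here is a minimal sketch of the lightweight linear-SVM option using scikit-learn. The synthetic dataset is just for illustration; `LinearSVC` trains roughly linearly in the number of samples, and the fitted model keeps only one weight vector per class:

```python
# Minimal sketch: a linear SVM as the simple, low-memory production option.
# The dataset here is synthetic and stands in for your real features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LinearSVC uses a linear kernel only; the trained model is just a
# weight vector plus an intercept, so memory stays small.
clf = LinearSVC(C=1.0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Contrast this with a kernel `SVC`, which must store a set of support vectors that can grow with the training set size.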
In your case, things you can look to do to improve are:
1) Try out various kernels for the SVM to see whether any of them improves accuracy further.
2) Try an ensemble of the two.
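For point 1, a grid search over kernels is the usual way to try them systematically. A sketch in scikit-learn follows; the kernel choices and parameter grids are illustrative, not exhaustive:

```python
# Sketch: cross-validated search over several SVM kernels.
# The grids below are examples; tune the ranges for your own data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
    {"kernel": ["poly"], "C": [1], "degree": [2, 3]},
]
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```

Remember that the non-linear kernels here are exactly the ones that can blow up memory usage, as noted above.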
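For point 2, one simple way to ensemble the two models is soft voting over their predicted probabilities. This is a sketch, again assuming scikit-learn; `probability=True` is needed so the SVC can emit probabilities for voting:

```python
# Sketch: a soft-voting ensemble combining an SVM and AdaBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Each base model votes with its class probabilities; the ensemble
# averages them and picks the highest-probability class.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("ada", AdaBoostClassifier(random_state=0)),
    ],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=5)
print(scores.mean())
```

Stacking (feeding both models' outputs into a meta-learner) is another option if simple voting does not help.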