Bagging is just an ensemble of classifiers, all of which contribute to the final decision. You can build the ensemble by giving each model a different bootstrap sample of the records and/or a different subset of features (random forest does this), or you can train different models on the same set of features. A rough sketch is below.
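A minimal sketch of both variants, assuming scikit-learn and its toy breast-cancer dataset (any tabular classification data would do):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Plain bagging: the same base model (a decision tree by default),
# each copy trained on a different bootstrap sample of the records.
bagging = BaggingClassifier(n_estimators=100)

# Random forest: bootstrap samples plus a random subset of features
# considered at each split.
forest = RandomForestClassifier(n_estimators=100)

for name, model in [("bagging", bagging), ("random forest", forest)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```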
In vanilla ML every record in the data set is treated with the same weight. The idea behind boosting (like AdaBoost) is to train models iteratively and check which records the current models have problems with. You modify the weights accordingly, train the next model, and hope it does better. The real-world intuition is: some records are easy, some are tough, so we're trying to train a model that can tackle both.
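A minimal sketch of that loop using scikit-learn's AdaBoostClassifier (same assumed dataset as above); the re-weighting happens inside `fit`:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Weak learners (depth-1 trees by default) are added one at a time;
# each new one puts more weight on the records the ensemble so far
# gets wrong.
boosted = AdaBoostClassifier(n_estimators=100)
print(cross_val_score(boosted, X, y, cv=5).mean())
```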
This is just an intuitive look; there are quite a few methods. It's best to check the docs of the particular method, like xgboost.
It's also good to run them yourself on different data sets to acquire some intuitions, like: a vanilla SVM will fail on data with outliers, while xgboost will do just fine.
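A rough sketch of that kind of experiment, assuming the xgboost package is installed; the data here is synthetic with a few deliberately corrupted rows, and the actual scores you get will depend on the data you generate:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Corrupt a handful of rows with extreme values to simulate outliers.
outliers = rng.choice(len(X), size=20, replace=False)
X[outliers] += rng.normal(scale=50, size=(20, X.shape[1]))

# Compare how each model copes with the corrupted data.
for name, model in [("SVC", SVC()), ("XGB", XGBClassifier())]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```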