There is no "true" answer to your questions - which approach to take depends heavily on your setting, the models you apply, and the goals at hand.
The topic of class imbalance has been discussed elsewhere (for example here and here).
A valid reason for oversampling/undersampling your positive or negative class training examples could be the knowledge that the true incidence of positive instances is higher (or lower) than your training data suggests. Then you might want to apply sampling techniques to achieve a positive/negative class balance that matches that prior knowledge.
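If you go down that route, a minimal sketch could look as follows (assuming a pandas DataFrame `df` with a binary `label` column; the function name and the `target_pos_rate` default are purely illustrative):

```python
import pandas as pd
from sklearn.utils import resample

def resample_to_prior(df, label_col="label", target_pos_rate=0.3,
                      random_state=42):
    """Resample the positive class (with replacement) until it makes up
    target_pos_rate of the data."""
    pos = df[df[label_col] == 1]
    neg = df[df[label_col] == 0]
    # Solve n_pos / (n_pos + len(neg)) == target_pos_rate for n_pos
    n_pos = int(len(neg) * target_pos_rate / (1 - target_pos_rate))
    pos_up = resample(pos, replace=True, n_samples=n_pos,
                      random_state=random_state)
    # Shuffle rows so the classes are interleaved again
    return pd.concat([pos_up, neg]).sample(frac=1, random_state=random_state)
```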
While not really dealing with imbalance in your label distribution, your specific setting may warrant assigning different costs to false positives and false negatives (e.g. the cost of misclassifying a cancer patient as healthy may be higher than vice versa). You can deal with this by e.g. adapting your cost function (e.g. a false negative incurring higher cost than a false positive) or by performing some kind of threshold optimization after training (to e.g. reach a certain precision/recall in cross-validation).
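Both options are easy to try in scikit-learn. Here is a sketch on synthetic data (the 1:5 class weighting and the 0.9 precision target are illustrative choices, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y,
                                                  random_state=0)

# Option 1: make false negatives 5x as costly as false positives
clf = LogisticRegression(class_weight={0: 1, 1: 5}).fit(X_train, y_train)

# Option 2: tune the decision threshold on a validation set, e.g. pick
# the lowest threshold that still reaches 0.9 precision.
probs = clf.predict_proba(X_val)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_val, probs)
ok = precision[:-1] >= 0.9   # precision has one more entry than thresholds
threshold = thresholds[ok][0] if ok.any() else 0.5
y_pred = (probs >= threshold).astype(int)
```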
Highly correlated features are mainly a problem for models whose parameter estimates assume that features are not (nearly) linearly dependent. For example, if you have an issue with multicollinearity in your feature space, parameter estimates in logistic regression may be unstable or off. Whether or not there is multicollinearity you can check using, for example, the variance inflation factor (VIF). However, not all models carry such an assumption, so you might be safe disregarding the issue depending on your setting.
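statsmodels ships a VIF implementation; a small sketch with deliberately collinear synthetic data (the variable names and the 5-10 cutoff rule of thumb are illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

Xc = add_constant(X)   # include an intercept before computing VIFs
vifs = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vifs)   # x1 and x3 should show VIFs far above the common 5-10 cutoff
```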
The same goes for standardisation: it may not be necessary for some models (e.g. tree-based classifiers), but other methods may require it (e.g. PCA).
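If you do standardise, it is worth fitting the scaler inside a pipeline so that its means and variances are learned from the training folds only, avoiding leakage during cross-validation. A minimal sketch (the component count is arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Scaling happens before PCA, and both are refit per training fold
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LogisticRegression())
model.fit(X, y)
```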
Whether or not to handle outliers is a difficult question. First you would have to define what an outlier is: are they, e.g., the result of human error? Do you expect to see similar instances out in the wild? If you can establish that your model performs better when trained with outliers removed (on a holdout validation or test set), then: sure, go for it. But keep potential outliers in for validation if you plan to apply your model to streams of data which may produce similar outliers.
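That comparison is cheap to run. Here is a sketch on synthetic regression data, where a 3-standard-deviation z-score on the target is one illustrative (by no means the only) outlier definition, and the holdout set keeps its outliers:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, n_features=5, noise=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Flag training targets more than 3 standard deviations from the mean
z = np.abs((y_train - y_train.mean()) / y_train.std())
keep = z < 3

# Train with and without the flagged points; validate on the untouched
# holdout set, which keeps whatever outliers it contains
full = LinearRegression().fit(X_train, y_train).score(X_val, y_val)
trimmed = LinearRegression().fit(X_train[keep], y_train[keep]).score(X_val, y_val)
print(f"R^2 with outliers: {full:.3f}, without: {trimmed:.3f}")
```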