Welcome aboard.
Regarding your question, a better order of operations would be:
preprocessing -> train/test split -> normalizing -> over-/undersampling
Data cleaning and preprocessing
This should be your first task. It includes removing errors from the data and joining the different data sources you need, which are often scattered across the company.
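A minimal sketch of this step with pandas, assuming two hypothetical tables (`crm` and `billing`, invented here for illustration) that share a customer id:

```python
import pandas as pd

# Hypothetical example: two tables scattered across the company,
# joined on a shared customer id, then cleaned of duplicates and gaps.
crm = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],   # note the duplicated row for id 2
    "age": [34, 51, 51, None],     # and a missing value for id 3
})
billing = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "monthly_spend": [120.0, 80.5, 200.0],
})

df = (
    crm.drop_duplicates(subset="customer_id")        # remove duplicated records
       .dropna(subset=["age"])                       # drop rows with missing age
       .merge(billing, on="customer_id", how="inner")  # join the sources
)
print(df)
```

What counts as an "error" (duplicates, missing values, impossible ranges) depends on your domain; the point is that all of it happens before any split.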
Train/test split
This should come next, for two reasons:
If you normalize the dataset before the split, statistics computed from the test data leak into training (the model must be evaluated on truly unseen values).
Test data should reflect the real world as it is; if you apply any kind of sampling to it, you distort that reality.
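A short sketch of the split with scikit-learn, on a toy imbalanced dataset (the 90/10 class counts are hypothetical); `stratify` keeps the class ratio the same in both halves, and the test set is then left untouched:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: 90 negatives, 10 positives.
X = np.arange(100, dtype=float).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

# Split FIRST, before normalizing or resampling.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(y_train.mean(), y_test.mean())  # class ratio preserved in both halves
```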
Normalizing
Normalizing your data before sampling is good practice: some sampling methods (SMOTE, for example) use distances or models to generate new data points, and normalized inputs lead to better synthetic samples.
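This is also where the leakage point above becomes concrete: fit the scaler on the training data only, then apply it to both sets. A sketch with scikit-learn's `StandardScaler` on hypothetical arrays from the split step:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(50, 10, size=(80, 1))  # hypothetical training features
X_test = rng.normal(50, 10, size=(20, 1))   # hypothetical test features

# Fit on the training data ONLY: the test set's mean and standard
# deviation never influence the transformation.
scaler = StandardScaler().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)

print(X_train_norm.mean())  # ~0: train is centered by its own statistics
```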
Sampling
And at last, sample your data. I recommend evaluating different sampling methods and sampling ratios, and comparing the resulting models on the untouched test set.
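To make "different ratios" concrete, here is a naive random-oversampling sketch (a stand-in for library methods such as imblearn's SMOTE; the `ratio` parameter name is my own, meaning the desired minority/majority size ratio). It is applied to training data only:

```python
import numpy as np

def random_oversample(X, y, ratio=1.0, seed=42):
    """Duplicate random minority-class rows until the minority/majority
    size ratio reaches `ratio` (assumes labels are 0/1)."""
    rng = np.random.default_rng(seed)
    minority = y == 1
    n_needed = int(ratio * (~minority).sum()) - int(minority.sum())
    if n_needed <= 0:
        return X, y
    idx = rng.choice(np.flatnonzero(minority), size=n_needed, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

# Hypothetical training set: 72 negatives, 8 positives.
X_train = np.arange(80, dtype=float).reshape(-1, 1)
y_train = np.array([0] * 72 + [1] * 8)

# Try several ratios; in practice, train a model on each resampled set
# and compare scores on the untouched test set.
results = {}
for ratio in (0.25, 0.5, 1.0):
    Xs, ys = random_oversample(X_train, y_train, ratio)
    results[ratio] = (int(ys.sum()), len(ys))
print(results)
```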