1) LabelEncoder is needed because your machine learning model can't handle strings. It maps each class to a sequential numeric label (0, 1, 2, ..., n-1). Note that it's only for the label (target) part; afterwards, you may use one-hot encoding or the numeric labels directly, depending on your model's requirements.
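A minimal sketch of this (the class names here are just an illustration): `LabelEncoder` sorts the unique classes and assigns each one an integer, and `inverse_transform` recovers the original strings.

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y = ["cat", "dog", "cat", "bird"]

# Classes are sorted alphabetically: bird -> 0, cat -> 1, dog -> 2
y_enc = le.fit_transform(y)
print(y_enc)        # [1 2 1 0]
print(le.classes_)  # ['bird' 'cat' 'dog']

# Map the integers back to the original class names
print(le.inverse_transform([1, 2, 1, 0]))  # ['cat' 'dog' 'cat' 'bird']
```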
2) StandardScaler makes your data zero-mean and unit-variance.
The standard score of a sample x is calculated as:
z = (x - u) / s
where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected. (scikit-learn documentation)
So, usually, scaling your data helps, and it can lead to faster convergence. But, again, whether it is necessary depends on the ML model you are using.
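To make the z = (x - u) / s formula concrete, here is a small sketch (the data is made up for illustration). Each column is standardized independently, using the mean and standard deviation learned from the training data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # per column: z = (x - u) / s

print(scaler.mean_)            # [  2. 200.]
print(X_scaled.mean(axis=0))   # ~[0. 0.]
print(X_scaled.std(axis=0))    # ~[1. 1.]
```

Note that in a real pipeline you would call `fit_transform` on the training set only, and then `transform` on the test set, so that the test data is scaled with the training statistics.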