In Machine Learning, the output (response or predicted) variable of a model depends on the input (predictor or explanatory) variables with some degree of linearity (positive or negative).
But in some datasets, or in a model with multiple input variables (X1, X2, X3, X4 and X5, for example), a linear relationship may also exist among the input variables themselves. That is, X1 is correlated with X2, and X1 is correlated with X3 as well. So X1, X2 and X3 are correlated with each other, and multicollinearity exists in this model. Note that multicollinearity describes the correlation between one input variable and another input variable (not with the output variable).
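A quick way to spot this kind of relationship among predictors is a pairwise correlation matrix. Below is a minimal sketch using synthetic data, where X2 and X3 are deliberately built from X1 (the variable names and noise levels are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: X2 and X3 are derived from X1, so these three
# predictors are correlated with each other (multicollinearity),
# while X4 and X5 are independent.
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=n)
x3 = 0.8 * x1 + rng.normal(scale=0.4, size=n)
x4 = rng.normal(size=n)
x5 = rng.normal(size=n)

X = np.column_stack([x1, x2, x3, x4, x5])
corr = np.corrcoef(X, rowvar=False)

# Off-diagonal entries near +/-1 flag correlated predictor pairs.
print(np.round(corr, 2))
```

In the printed matrix, the X1/X2 and X1/X3 entries come out close to 1, while the entries involving X4 and X5 stay near 0.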
Let's take a housing price prediction model for a clearer understanding. Consider the input variables Square Feet Size, No. of Bedrooms, and UDS (Undivided Share in Sq. Feet) in our dataset, and the output (predicted) variable Housing Price.
All 3 input variables are correlated with each other. How?
If the number of bedrooms increases, the size of the house increases as well.
If the size of the house increases, the UDS increases. So multicollinearity exists here, and we should resolve the multicollinearity issue before the model is trained.