If you are interested in using a non-parametric correlation, perhaps you can look at Kendall's tau. As a U-statistic, it is asymptotically normal. (If I am not mistaken, Spearman's rho is asymptotically normal too, so the procedure I am about to describe is valid for Spearman as well.) This suggests you can use the normal log-likelihood to perform your shrinkage.
More precisely, let tau.hat be your vectorized (estimated) correlation matrix and
L(tau | tau.hat, Sigma) = t(tau - tau.hat) Sigma^{-1} (tau - tau.hat)
where cov(tau.hat) = Sigma. L is the loss function that will tell you when you have applied too much shrinkage (noting that L should be roughly chi-squared distributed).
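To make the loss concrete, here is a minimal R sketch; the log-return matrix X is a simulated placeholder (substitute your own n x p data), and Sigma is left as an argument since we deal with it next:

X <- matrix(rnorm(500 * 5), 500, 5)   # placeholder for your n x p log-return matrix
K <- cor(X, method = "kendall")       # estimated Kendall correlation matrix
tau.hat <- K[lower.tri(K)]            # vectorize its lower triangle
loss <- function(tau, tau.hat, Sigma) {
  e <- tau - tau.hat
  drop(t(e) %*% solve(Sigma, e))      # t(tau - tau.hat) Sigma^{-1} (tau - tau.hat)
}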
This is not very helpful on its own since you don't know Sigma, but there are ways of estimating it (for 50 stocks it might still be okay, but the size of Sigma explodes quickly with the number of stocks). This is the main reason why I suggest Kendall's tau over Spearman's rho: you can get a handle on the covariance matrix of tau.hat (that is, Sigma). (See this paper for an estimator of Sigma: https://arxiv.org/abs/1706.05940)
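If you want something quick and dirty instead, a nonparametric bootstrap over the rows of X gives a crude Sigma.hat; to be clear, this is not the estimator from the paper above, it ignores any serial dependence in the returns, and for 50 stocks (tau.hat has d = 1225 entries) the resulting 1225 x 1225 matrix will be badly rank-deficient, which is one more reason for the shrinkage of Sigma.hat below:

boot.Sigma <- function(X, B = 200) {
  vec.tau <- function(Y) {
    K <- cor(Y, method = "kendall")
    K[lower.tri(K)]
  }
  # resample rows with replacement, recompute the vectorized Kendall matrix each time
  taus <- replicate(B, vec.tau(X[sample(nrow(X), replace = TRUE), , drop = FALSE]))
  cov(t(taus))                        # d x d covariance across the B replicates
}
Sigma.hat <- boot.Sigma(X)            # continuing with X from the sketch above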
From there, you can use a standard technique to shrink Sigma.hat (this step is necessary, since the raw estimate will be noisy or even singular), e.g. simply shrink it towards its diagonal version Sigma0.hat with
Sigma.tilde = w Sigma0.hat + (1-w) Sigma.hat
with, say, w = .5. At the very least, make sure Sigma.tilde is positive definite...
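Continuing the sketch in R (Sigma.hat from above, or whatever estimate you ended up with):

Sigma0.hat <- diag(diag(Sigma.hat))   # keep only the diagonal
w <- 0.5
Sigma.tilde <- w * Sigma0.hat + (1 - w) * Sigma.hat
# sanity check: the smallest eigenvalue should be strictly positive
min(eigen(Sigma.tilde, symmetric = TRUE, only.values = TRUE)$values) > 0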
Then (finally!) get a shrunk version of tau.hat by letting q go from 0 towards 1 in
tau.tilde = (1-q) tau.hat + q mean(tau.hat)
until
Prob[chi_d^2 > L(tau.tilde | tau.hat, Sigma.tilde)] = .05,
where the degrees of freedom for your chi-squared are (?!?!?) d = length(tau.hat) - q.
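Putting that last step into code, still as a sketch: the stopping rule, the .05 level, and the dubious degrees of freedom are taken as-is from above (pchisq happily accepts non-integer df), and the grid step for q is arbitrary:

shrink.tau <- function(tau.hat, Sigma.tilde, alpha = 0.05, step = 0.01) {
  m <- mean(tau.hat)
  tau.tilde <- tau.hat
  for (q in seq(0, 1, by = step)) {
    candidate <- (1 - q) * tau.hat + q * m
    d <- length(tau.hat) - q          # degrees of freedom as written above (probably wrong)
    if (pchisq(loss(candidate, tau.hat, Sigma.tilde), df = d, lower.tail = FALSE) <= alpha) break
    tau.tilde <- candidate            # keep the last candidate that passed the test
  }
  tau.tilde
}
tau.tilde <- shrink.tau(tau.hat, Sigma.tilde)   # tau.hat, Sigma.tilde from the sketches above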
I am not sure the degrees of freedom for the chi-squared are right. Actually, I am pretty sure they are not. Note also that .05 was chosen somewhat arbitrarily. The bottom line is that you might want to take a look at the paper referred to above, as they (we, I must say) indeed shrink the Kendall correlation matrix of the log-returns of 100 stocks. The way it is done there allows us to know more about the degrees of freedom in question (and it does not require you to provide a structure a priori, as it learns a block structure from the data).