This is not the complete answer to your problem, but it could give you a first idea of what is possible and of how I would approach the problem once I had found a good metric for this clustering task.
Let's assume we have figured out that the Euclidean metric is well suited. Then we can do the following (I use random numbers here, just for illustration):
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.randint(100, size=(20, 5))

for i in range(2, 10):
    km = KMeans(n_clusters=i)
    y_pred = km.fit_predict(X)
    print('num_clusters: ' + str(i) + "\t" + str(silhouette_score(X, y_pred)))
Output:
num_clusters: 2 0.24318056918852374
num_clusters: 3 0.21859606573283147
num_clusters: 4 0.2320853440044738
num_clusters: 5 0.21159893083770434
num_clusters: 6 0.2436021768392968
num_clusters: 7 0.2798416731321928
num_clusters: 8 0.31839456337186695
num_clusters: 9 0.27654631385700396
The silhouette score measures how well each sample fits within its assigned cluster and how strongly the clusters overlap with respect to the given metric. The best possible score is 1, the worst is -1, and values near 0 indicate overlapping clusters. So in this particular run, 8 clusters would fit the problem best.
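To see what a high silhouette score looks like, here is a minimal sketch (with hypothetical, artificially separated data) showing that two well-separated blobs score close to 1:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # Two clearly separated blobs, just to illustrate the score range
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.1, size=(10, 2)),
                   rng.normal(10, 0.1, size=(10, 2))])

    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    y_pred = km.fit_predict(X)
    print(silhouette_score(X, y_pred))  # close to 1 for well-separated clusters

On real, messy data you will rarely see scores near 1; values in the 0.2-0.5 range like the output above are common.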
But keep in mind that you need to choose the algorithm and metric appropriate to your problem, so you need a criterion for when tables are similar and when they are totally different.
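If Euclidean distance turns out not to match your notion of table similarity, note that silhouette_score accepts a metric parameter (any metric understood by sklearn.metrics.pairwise_distances), so you can evaluate the same clustering under different metrics. A quick sketch, again on random data just for illustration:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X = np.random.randint(100, size=(20, 5)).astype(float)
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    y_pred = km.fit_predict(X)

    # Score the same labeling under different distance metrics
    print(silhouette_score(X, y_pred, metric='euclidean'))
    print(silhouette_score(X, y_pred, metric='cosine'))
    print(silhouette_score(X, y_pred, metric='manhattan'))

Keep in mind that KMeans itself still minimizes squared Euclidean distance; if another metric is fundamentally better for your data, a different algorithm (e.g. hierarchical clustering) may be the better fit.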