Other answers compute the KL divergence empirically, but for two Beta distributions, as in the question, it has a closed-form expression. I was not able to find a snippet for it on the web, so in the end I coded it myself. Sharing it as it might be useful for others:
from scipy import special

def kl(a1, b1, a2, b2):
    """KL(Beta(a1, b1) || Beta(a2, b2)).

    Formula: https://en.wikipedia.org/wiki/Beta_distribution
    """
    # betaln works in log space, avoiding overflow of beta() for large a, b.
    return (special.betaln(a2, b2) - special.betaln(a1, b1)
            + (a1 - a2) * special.digamma(a1)
            + (b1 - b2) * special.digamma(b1)
            + (a2 - a1 + b2 - b1) * special.digamma(a1 + b1))
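As a sanity check, the closed form can be compared against a direct numerical estimate of E_p[log p(x) - log q(x)]; the parameter values below are arbitrary examples. The sketch redefines the function so it runs standalone:

```python
import numpy as np
from scipy import special, stats, integrate

def kl(a1, b1, a2, b2):
    """Closed-form KL(Beta(a1, b1) || Beta(a2, b2)), same formula as above."""
    return (special.betaln(a2, b2) - special.betaln(a1, b1)
            + (a1 - a2) * special.digamma(a1)
            + (b1 - b2) * special.digamma(b1)
            + (a2 - a1 + b2 - b1) * special.digamma(a1 + b1))

# KL divergence of a distribution from itself must be zero.
print(kl(2.0, 3.0, 2.0, 3.0))  # 0.0

# Numerical estimate of the same divergence by integrating the integrand
# p(x) * (log p(x) - log q(x)) over the support [0, 1].
p = stats.beta(2.0, 3.0)
q = stats.beta(4.0, 5.0)
est, _ = integrate.quad(lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x)), 0, 1)
print(np.isclose(kl(2.0, 3.0, 4.0, 5.0), est))  # True
```

The two values agree to numerical-integration precision, which is a quick way to convince yourself the closed form is implemented correctly.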