I get an out-of-memory error when creating the following Gaussian process model, and I would like to know whether GPflow has a feature for loading the data in batches rather than reading it all at once.
Here is the code I tried:

import gpflow

# X and Y each contain roughly 1e6 rows
data = (X, Y)

model = gpflow.models.VGP(
    data,
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Bernoulli(),
)
# raises an out-of-memory (OOM) error
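
I assume the OOM happens because VGP works with the dense N x N kernel matrix, which cannot fit in memory for ~1e6 points. What I'm hoping exists is something along the lines of the sketch below, based on my reading of the SVGP model and TensorFlow's tf.data pipeline; the inducing points Z, batch size, learning rate, and step count are placeholder values I made up, and I don't know whether this is the recommended pattern:

import numpy as np
import tensorflow as tf
import gpflow

N = 1_000_000                      # total number of training points
X = np.random.rand(N, 1)           # stand-in features
Y = (np.random.rand(N, 1) > 0.5).astype(np.float64)  # stand-in binary labels

# Sparse variational GP: memory scales with the number of inducing
# points M rather than N, so the full N x N kernel matrix is never built.
M = 100                            # placeholder number of inducing points
Z = X[:M].copy()                   # initial inducing point locations

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Bernoulli(),
    inducing_variable=Z,
    num_data=N,                    # used to rescale the minibatch ELBO
)

# Stream the data in minibatches instead of passing it all to the model.
batch_size = 256
dataset = (
    tf.data.Dataset.from_tensor_slices((X, Y))
    .repeat()
    .shuffle(buffer_size=10_000)
    .batch(batch_size)
)
batches = iter(dataset)

optimizer = tf.keras.optimizers.Adam(0.01)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)  # negative ELBO on this batch
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for _ in range(1_000):             # placeholder number of steps
    train_step(next(batches))

Is something like this the intended approach, or is there a built-in way to batch the data for VGP itself?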