To my understanding, what she refers to as "objects" in the video are the data points/instances of the dataset.
The problem with classical boosting pointed out in the CatBoost paper is prediction shift: what the model learns on the training set does not carry over to the test set. They say the root of the problem is that each tree is fit to residuals computed on the same data points the previous trees were trained on, so those residuals are optimistically biased and the model never experiences unseen data during training.
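As a rough illustration of that bias (this is plain sklearn gradient boosting on arbitrary synthetic data, not CatBoost), you can compare residuals measured on the training data against residuals on held-out data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + rng.normal(scale=0.5, size=2000)  # toy regression target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Residuals on points the trees have already seen come out smaller than
# residuals on unseen points -- the optimistic bias behind prediction shift.
print("mean |residual| (train):", np.mean(np.abs(y_tr - model.predict(X_tr))))
print("mean |residual| (test): ", np.mean(np.abs(y_te - model.predict(X_te))))
```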
In ordered boosting, a tree is trained on one subset of the dataset and used to calculate residuals for another subset that it hasn't seen. CatBoost achieves this by creating an artificial time, that is, a random permutation of the data points.
Say you have ten data points, numbered 0 to 9. CatBoost will create a permutation, e.g. 5,0,2,1,3,6,4,9,7,8 (just an arbitrary permutation I came up with); a model is trained on 5,0,2,1,3 and then used to compute residuals for 6,4,9,7,8.
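Here is a minimal Python sketch of that two-block picture. The permutation comes from numpy rather than CatBoost, and the "model" is just the mean target of the fitted half, a placeholder standing in for a real tree, so treat it as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=10)      # toy targets for data points 0..9
perm = rng.permutation(10)   # the "artificial time" ordering

fit_half, residual_half = perm[:5], perm[5:]

# Placeholder "model": the mean target of the fitted half. The point is
# only that residuals are computed for indices the model never saw.
prediction = y[fit_half].mean()
residuals = y[residual_half] - prediction

print("trained on:", fit_half, "-> residuals for:", residual_half)
print(residuals)
```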
This is just my own understanding, and I'm by no means saying it's 100% right. Any comments and corrections are very welcome and appreciated.