I'm fitting an inhomogeneous Poisson model to a spatial point pattern dataset with the function ppm (spatstat package), which uses the Berman-Turner quadrature approximation to estimate the parameters by maximum likelihood.
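For reference, a minimal sketch of the kind of fit I mean (the synthetic pattern, intensity function, and covariates here are purely illustrative):

```
library(spatstat)

# Synthetic inhomogeneous Poisson pattern on the unit square
set.seed(42)
X <- rpoispp(function(x, y) 100 * exp(2 * x), lmax = 100 * exp(2))

# Fit a log-linear inhomogeneous Poisson model; ppm uses the
# Berman-Turner quadrature approximation internally
fit <- ppm(X ~ x + y)
summary(fit)
```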
By default, for computational reasons, the dummy points are arranged in a rectangular grid, and the quadrature weights are determined by dividing the observation window into a grid of rectangular tiles (the number of tiles equals the number of dummy points).
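One way to inspect this (a sketch, assuming quadscheme is still the helper that ppm calls internally, with method = "grid" as its default):

```
# Build the default Berman-Turner quadrature scheme for the pattern X above
Q <- quadscheme(X)   # method = "grid" by default (grid counting weights)
Q                    # prints the numbers of data and dummy points
npoints(Q$dummy)     # dummy points arranged in a rectangular grid
sum(w.quad(Q))       # quadrature weights sum to the window area

# The default grid can be overridden, e.g. a 64 x 64 dummy grid:
fit2 <- ppm(X ~ x + y, nd = 64)
```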
Considering a square window, the number of dummy points generated automatically appears to be a piecewise constant function that depends only on the number of data points (and not on the dimensions of the window); more precisely:
number of │ number of
data points │ dummy points
(intervals) │ generated
──────────────────────────────
0 - 225 │ 1028 (4*257)
226 - 400 │ 1604 (4*401)
401 - 625 │ 2504 (4*626)
626 - 900 │ 3604 (4*901)
901 - 1225 │ 4904 (4*1226)
etc. │ etc.
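As a check, here is a small sketch (assuming default.dummy() is the routine that generates the default dummy pattern for quadscheme/ppm) that reproduces these counts empirically at the interval boundaries:

```
library(spatstat)

set.seed(1)
for (n in c(100, 225, 226, 400, 401, 625, 626, 900, 901, 1225)) {
  X <- runifpoint(n)          # n uniform data points in the unit square
  D <- default.dummy(X)       # default dummy pattern used by quadscheme/ppm
  cat(sprintf("%4d data points -> %4d dummy points\n", n, npoints(D)))
}
```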
My questions:
- Why was this function chosen as the default in spatstat? Is it a kind of 'rule of thumb'?
- Also, could you point me to articles or papers in which different methods and functions for choosing the positions and number of dummy points are compared?