I am working on an algorithm that subdivides a large data problem and works on the pieces across many nodes. The local solution to each subdivision can be modified to match the global solution if each subdivision knows a limited amount of information about the subdivisions around it.
This can be achieved with a fixed number of communication rounds between neighboring subdivisions, allowing for a nearly embarrassingly parallel solution.
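To make the pattern concrete, here is a minimal serial sketch of the kind of scheme I mean, using a toy problem of my own invention (a 3-point smoothing stencil on a 1-D array split into blocks). In a real code the "publish boundary values" step would be a halo exchange between nodes, and the names (`smooth_block`, `solve`, `rounds`) are just placeholders:

```python
def smooth_block(block, left_ghost, right_ghost):
    """One local update; the ghosts carry the neighbors' boundary values."""
    padded = [left_ghost] + block + [right_ghost]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]


def solve(blocks, rounds=3):
    """`rounds` is a fixed constant, independent of the total problem size."""
    for _ in range(rounds):
        # "Communication": each block publishes one boundary value per side.
        left_edges = [b[0] for b in blocks]
        right_edges = [b[-1] for b in blocks]
        # Every block then updates independently (perfectly parallel).
        blocks = [
            smooth_block(
                b,
                right_edges[i - 1] if i > 0 else b[0],  # domain edge: reuse own value
                left_edges[i + 1] if i + 1 < len(blocks) else b[-1],
            )
            for i, b in enumerate(blocks)
        ]
    return blocks


print(solve([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
```

The key point is that each block only ever sees boundary data from its immediate neighbors, and it sees it a fixed number of times.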
An upshot, though, is that even if the problem were solved on a single core, each piece of data would need to be loaded only a fixed number of times, regardless of the size of the problem, to reach a solution.
Thus the algorithm parallelizes nicely, allowing fast solutions on supercomputers with enough nodes to hold all of the data in memory at once, but it also permits very large datasets to be processed with limited resources by loading the data from disk a fixed number of times.
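The same toy problem shows the out-of-core version of the property. Here each block lives in its own file (a hypothetical pickle-per-block layout I chose purely for illustration); each round streams every block through memory exactly once, so the total disk traffic is `rounds + 1` full passes over the data, however large the dataset is:

```python
import os
import pickle
import tempfile


def smooth_block(block, left_ghost, right_ghost):
    # Same toy 3-point stencil as in the sketch above.
    padded = [left_ghost] + block + [right_ghost]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]


def read_block(path):
    with open(path, "rb") as f:
        return pickle.load(f)


def write_block(path, block):
    with open(path, "wb") as f:
        pickle.dump(block, f)


def solve_out_of_core(paths, rounds=3):
    # One cheap initial pass to record each block's boundary values.
    edges = [(b[0], b[-1]) for b in (read_block(p) for p in paths)]
    for _ in range(rounds):
        new_edges = []
        for i, p in enumerate(paths):
            block = read_block(p)  # the only full load of this block this round
            left = edges[i - 1][1] if i > 0 else block[0]
            right = edges[i + 1][0] if i + 1 < len(paths) else block[-1]
            block = smooth_block(block, left, right)
            write_block(p, block)
            new_edges.append((block[0], block[-1]))
        edges = new_edges  # pre-round boundary values for the next round


with tempfile.TemporaryDirectory() as d:
    paths = []
    for i, b in enumerate([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]):
        paths.append(os.path.join(d, f"block{i}.pkl"))
        write_block(paths[-1], b)
    solve_out_of_core(paths)
    print([read_block(p) for p in paths])
```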
Is there a standard word or phrase for an algorithm with this property?