Partitions are generally used to increase performance, not decrease it, but you're right that if you have too many, you will take a performance hit. It sounds like you want to know how many partitions is too many.
I'm going to assume that the processing time you are talking about is the time to process the cube, not the time to query the cube.
The general idea of partitions is that you only have to reprocess a small subset of them when you reprocess the cube, which is a huge performance win. But if you are processing a large number of partitions, the fixed overhead of processing each individual partition becomes non-negligible. The point at which this happens depends on a number of factors. The factors that scale with the number of partitions include (see the sketch after this list):
- Additional queries to your data source. This cost varies greatly with how your data source is arranged.
- Additional files to store the partitions.
- Additional links to the partitions.
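To make that concrete, here is a back-of-the-envelope model of full-cube processing time as a function of partition count. All the numbers (total fact count, per-fact cost, per-partition overhead) are made-up assumptions for illustration, not measurements from any real system:

```python
# Back-of-the-envelope model of full-cube processing time. Every number here
# is an illustrative assumption, not a measurement.
TOTAL_FACTS = 100_000_000     # assumed fact count
PER_FACT_COST = 2e-6          # assumed seconds to process one fact row
PER_PARTITION_OVERHEAD = 0.5  # assumed seconds of query/file/metadata setup per partition

work = TOTAL_FACTS * PER_FACT_COST  # scales with data volume, not partition count

for n_partitions in (10, 100, 1_000, 10_000):
    overhead = n_partitions * PER_PARTITION_OVERHEAD
    share = overhead / (work + overhead)
    print(f"{n_partitions:>6} partitions: {share:5.1%} of processing time is overhead")
```

With these made-up numbers the overhead is noise at 10 partitions and dominates by 10,000; where the crossover actually falls depends entirely on your own per-partition setup cost.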
I think the biggest factor here is how you get the data from the data source. If your source doesn't support the partitioning scheme well, your performance will be horrendous. If it supports it well, e.g. a relational database with indexes on the columns you partition by, then you only incur the fixed overhead of the individual queries.
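Here is a minimal, self-contained sketch of that point, using an in-memory SQLite database as a stand-in for a real relational source (the table and column names are hypothetical). Each partition is processed with one range query; without an index on the slicing column every such query re-scans the whole fact table, while with the index each one is a cheap range seek:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (sale_date TEXT, amount REAL)")

# One processing query per partition slice (hypothetical schema).
partition_query = (
    "SELECT sale_date, amount FROM fact_sales "
    "WHERE sale_date >= ? AND sale_date < ?"
)

def show_plan(label):
    rows = conn.execute(
        "EXPLAIN QUERY PLAN " + partition_query,
        ("2023-01-01", "2023-02-01"),
    ).fetchall()
    print(label, [row[3] for row in rows])

show_plan("without index:")  # full table scan, repeated for every partition
conn.execute("CREATE INDEX ix_fact_sales_date ON fact_sales (sale_date)")
show_plan("with index:")     # cheap range seek, even with many partitions
```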
So I think a more fitting way to ask this question is not how many partitions is too many, but how small a partition is too small. I would say that if the number of facts in a partition is in the low hundreds, you probably have too many partitions. It's highly unlikely you will ever want that many; I think the 2 billion quoted is just to assure you that you'll never get there.
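If you want to sanity-check an existing scheme against that rule of thumb, something like this would do (the 500-row floor and the partition names are my assumptions, derived from the "low hundreds" guideline above, not an official limit):

```python
MIN_FACTS_PER_PARTITION = 500  # assumed floor, from the "low hundreds" rule of thumb

partition_fact_counts = {      # hypothetical counts, e.g. pulled from your source DB
    "sales_2023_01": 1_200_000,
    "sales_2023_02_day_01": 340,
    "sales_2023_02_day_02": 280,
}

for name, count in sorted(partition_fact_counts.items()):
    if count < MIN_FACTS_PER_PARTITION:
        print(f"{name}: only {count} facts -- consider merging this partition")
```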
Regarding whether you should have this many partitions: I don't think you should. Partition carefully, creating at most a few hundred partitions, and split the data based on how often it changes.
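As one possible shape for such a scheme (purely illustrative names and cutoffs, assuming the current year is the volatile part): fine-grained monthly partitions for the current year, which still gets reprocessed, and one coarse yearly partition for each static prior year:

```python
from datetime import date

def partition_scheme(first_year: int, today: date) -> list[str]:
    # Static history: one coarse partition per completed year, processed once.
    yearly = [f"sales_{y}" for y in range(first_year, today.year)]
    # Volatile window: monthly partitions for the current year, so routine
    # reprocessing only touches the months that actually change.
    monthly = [f"sales_{today.year}_{m:02d}" for m in range(1, today.month + 1)]
    return yearly + monthly

print(partition_scheme(2015, date(2024, 3, 15)))
# ['sales_2015', ..., 'sales_2023', 'sales_2024_01', 'sales_2024_02', 'sales_2024_03']
```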