I have created a multidimensional cube from a data source located in Google BigQuery. To connect the SSAS cube to BigQuery, I used the CData ADO.NET Provider for BigQuery. The cube is relatively simple: one measure group and ten dimensions. It is deployed on an on-premises virtual machine server.
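For context, the connection string follows the usual CData pattern, roughly like this (the project, dataset, and auth settings below are placeholders, not my real values):

    AuthScheme=OAuth;InitiateOAuth=GETANDREFRESH;ProjectId=my-project;DatasetId=my_dataset;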
However, I am currently experiencing performance issues during full processing of the cube: the process takes more than 30 minutes to complete. The fact table is approximately 200 GB, while the dimension tables are much smaller, 200 MB at most.
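To be precise about what I mean by full processing: it is the standard ProcessFull operation, essentially the following XMLA (the database ID is a placeholder):

    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Type>ProcessFull</Type>
      <Object>
        <DatabaseID>MyCubeDB</DatabaseID>
      </Object>
    </Process>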
Interestingly, to compare performance, I connected the same cube to a SQL Server database with the same fact and dimension sizes as above. Both the Analysis Services server and the database server are on-premises VMs. Surprisingly, a full process of the cube in this setup takes less than 8 minutes to complete.
Could you please help me understand the potential reasons behind this significant difference in full processing time between BigQuery and SQL Server? Are there any specific considerations or optimizations I should apply to improve cube processing time against BigQuery?
Any insights or suggestions you can provide would be greatly appreciated.
Analysis Services server VM configuration: 8 CPUs, 64 GB RAM
Thank you in advance for your assistance.
Note: query execution against the BigQuery dataset itself is very fast. While processing the cube, I monitored query performance on BigQuery, and each query took less than 10 seconds to finish, so query execution on the BigQuery side does not appear to be the bottleneck.
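For anyone who wants to reproduce that check, per-job durations can be pulled from BigQuery's INFORMATION_SCHEMA views, along these lines (the region qualifier depends on the dataset's location):

    SELECT
      job_id,
      TIMESTAMP_DIFF(end_time, start_time, SECOND) AS duration_seconds,
      total_bytes_processed
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
    ORDER BY creation_time DESC;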