5

I'm wondering if there are any performance implications from adding a lot of calculated members to my cube. On one hand, it's nice to have things defined once, located centrally, tested, and available for use from any client which doesn't support MDX. On the other hand, some of these members I'm adding might not be used very frequently, so I could just inline them in the one or two reports that might need them.

Aside from the clutter of having unnecessary members hanging around, should I keep the number of calculated members as small as possible? Will more increase cube processing time? Will they slow down queries which don't use those calculated members?

Dave Bauman

2 Answers

10

Calculated members have little to no effect on processing or on queries that don't use them. Add as many as you'd like!

The reason is that they're only defined on the cube; they're actually evaluated at runtime. Therefore, the only queries that will be slowed by them are queries that use them. For the same reason, expect them to return a bit slower than native (physical) measures.
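To make this concrete, here's what such a definition looks like in the cube's MDX script. It's a sketch with hypothetical measure names ([Sales Amount] and [Total Cost] are assumed to exist in the cube); nothing is materialized at processing time — the expression runs only when a query asks for the member:

```mdx
-- Hypothetical calculated member: stored as a definition with the cube,
-- evaluated per query at runtime, never persisted in the fact storage.
CREATE MEMBER CURRENTCUBE.[Measures].[Gross Margin %] AS
    ([Measures].[Sales Amount] - [Measures].[Total Cost])
        / [Measures].[Sales Amount],
    FORMAT_STRING = 'Percent',
    VISIBLE = 1;
```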

Look for every opportunity to make a calculated member an actual part of your cube if it's used very frequently. Also, learn and love the SCOPE statement. While a scoped calculated member is still evaluated at runtime, the SCOPE statement gives it a ready-made execution plan, so it tends to be faster. For my high-volume calculations, I will often create a physical measure in the DSV and then assign its value with a SCOPE statement.
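The DSV-plus-SCOPE pattern can be sketched like this (measure names are hypothetical; assume [Total Cost] was added to the DSV as a physical measure, e.g. backed by a NULL column, and gets its value assigned in the MDX script):

```mdx
-- Hypothetical: [Total Cost] exists as a physical measure in the cube,
-- so the engine treats it like a stored measure but fills it via SCOPE.
SCOPE ([Measures].[Total Cost]);
    THIS = [Measures].[Unit Cost] * [Measures].[Order Quantity];
END SCOPE;
```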

Eric
  • Isn't the entire point of a cube that all of your members are pre-calculated? Why in the world does SSAS provide such limited means of pre-calculating custom members? Sigh... – Jake Aug 03 '10 at 23:08
    @Jake: Not *all* members. The combination of measures in the cube isn't worth storing if it's a simple calculation -- for example if you had "Retail Sales" and "Internet Sales" as measures in a cube and you wanted "Total Sales", you wouldn't want to increase the size of your facts by 33% just to keep the "Total Sales" measure -- it's more efficient to just define the calculated measure of "Total Sales = Retail Sales + Internet Sales"... – Dave Markle Mar 16 '11 at 10:33
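Dave Markle's "Total Sales" example from the comment above would look like this as a calculated measure — a sketch assuming [Retail Sales] and [Internet Sales] are physical measures in the cube; no extra fact storage is needed:

```mdx
-- Runtime sum of two physical measures; the facts stay untouched.
CREATE MEMBER CURRENTCUBE.[Measures].[Total Sales] AS
    [Measures].[Retail Sales] + [Measures].[Internet Sales],
    FORMAT_STRING = 'Currency';
```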
0

Any time you can push calculations back into the relational model, MDX query performance will improve, but processing performance can take a hit, since the extra work happens at load time instead.

If you can pre-calculate some measures using in-row SQL logic, expose them as physical measures in your data source view. The storage engine can then build aggregates for them and the formula engine has less work to do at query time — you are basically pushing the heavy lifting down to SQL. This works really well for static calculations, conversion factors, and simple arithmetic.
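A named query in the DSV is one way to do this pre-calculation — a sketch with illustrative table and column names (dbo.FactSales is hypothetical):

```sql
-- Hypothetical DSV named query: the extended amount is computed in-row
-- by SQL, so SSAS can store and aggregate it like any physical measure.
SELECT
    f.OrderDateKey,
    f.ProductKey,
    f.OrderQuantity,
    f.UnitPrice,
    f.OrderQuantity * f.UnitPrice AS ExtendedAmount  -- pre-calculated in-row
FROM dbo.FactSales AS f;
```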

Another thing you can do is mark any intermediate calculated members that shouldn't be used by end users as hidden. This won't have any effect on performance, but it will de-clutter the cube from the end user's perspective.
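Hiding is done with VISIBLE = 0 on the member definition. A sketch with hypothetical measure names — the hidden intermediate can still be referenced by other calculations, it just doesn't show up in client tools:

```mdx
-- Hypothetical intermediate member, hidden from client tools.
CREATE MEMBER CURRENTCUBE.[Measures].[Raw Margin] AS
    [Measures].[Sales Amount] - [Measures].[Total Cost],
    VISIBLE = 0;

-- End-user-facing member built on the hidden intermediate.
CREATE MEMBER CURRENTCUBE.[Measures].[Margin %] AS
    [Measures].[Raw Margin] / [Measures].[Sales Amount],
    FORMAT_STRING = 'Percent',
    VISIBLE = 1;
```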

Jason Horner