How to Optimize Dundas Warehouse Cubes in terms of Memory Usage

Hi Dundas Community,

Happy New Year!

I have a question regarding how to optimize Dundas Warehouse Cubes in terms of Memory Usage.

We have a client that has built a number of data cubes, all of which use Warehouse storage. The cubes are mainly used as different calculation steps, and all of the data comes from either an Excel file or a SQL Server database.

I have noticed that their “In Use Memory” is really high, even though the cubes contain relatively few data points. I suspect it is caused by all the different calculations that have been added in every cube, which leads to every step/calculation being queried and brought into memory. This has led to the memory almost filling up (which it should not do) and to very high CPU usage, which makes the application slow.

I am going to conduct a PoC to try to optimize all of the warehoused cubes by removing unnecessary transformations, so that the “In Use Memory” can be reduced.
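As a first step in the PoC, I want to check how much data the warehouse tables actually hold, so I can compare that against the few data points we expect. Something like this is what I had in mind (just a rough sketch; I am assuming the warehouse database is on SQL Server, and the server/database names in the connection string are placeholders):

```python
# Rough diagnostic sketch: list the largest tables in the cube warehouse
# database, to see which cubes/calculation steps store the most data.
# Assumes the warehouse is a SQL Server database reachable via pyodbc;
# the connection details below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server;DATABASE=YourDundasWarehouse;"
    "Trusted_Connection=yes;"
)

query = """
SELECT t.name                         AS table_name,
       SUM(p.rows)                    AS row_count,
       SUM(a.total_pages) * 8 / 1024  AS total_mb
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)             -- heap or clustered index only
JOIN sys.allocation_units a
  ON a.container_id = p.partition_id
GROUP BY t.name
ORDER BY total_mb DESC;
"""

# Print the biggest warehouse tables first.
for table_name, row_count, total_mb in conn.execute(query):
    print(f"{table_name}: {row_count} rows, {total_mb} MB")
```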

My questions are these:

  1. Does anyone know the possible reasons for a high “In Use Memory” when it comes to data cubes? The client has almost no metric sets or dashboards yet.
  2. Does anyone know how to solve this (reduce the “In Use Memory” usage)?
  • Apart from reducing the number of unnecessary calculation steps in the cubes, reducing aggregations in the “Process Result”, and running health checks? (To check whether each change actually helps, I was planning to track the application’s memory with the small script after this list.)
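Since a lot of this will be trial and error, my plan is to log the application’s memory over time while the cubes are reprocessed, so I can see the effect of each change. Again just a rough sketch: it assumes Dundas BI is hosted in IIS, so the worker process is w3wp.exe (that name and the sampling interval would need adjusting for a different setup):

```python
# Sample the total memory of the Dundas BI worker process(es) over time.
# Assumes an IIS-hosted instance (w3wp.exe); adjust the name if self-hosted.
import time
import psutil

def sample_worker_memory(process_name="w3wp.exe", interval_s=30):
    while True:
        total_rss = 0
        for proc in psutil.process_iter(["name", "memory_info"]):
            # Skip processes we cannot read (their info values come back None).
            if proc.info["name"] == process_name and proc.info["memory_info"]:
                total_rss += proc.info["memory_info"].rss
        print(f"{time.strftime('%H:%M:%S')}  {total_rss / 1024**2:.0f} MB")
        time.sleep(interval_s)

sample_worker_memory()
```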

Any help would be greatly appreciated! Thanks