Same query from a data cube sent multiple times to the DB from one dashboard

The below discussion was initially published under the “Visualizations & Dashboards” category. It has since been moved to this category, “Analytics & Data Preparation”.

I have one datacube, with public parameters and one bridge parameter (so live cube, not warehoused).

I have one dashboard with 3 visualisations, all based on this same data cube (one shows the relation between, e.g., columns 1 & 2, the next between 2 & 3, and so on). The 3 visualisations use the same set of filters.

I am surprised to see the query being issued 3 times against the DB. I would have expected the data to be loaded once and then manipulated by Dundas in different ways according to what I set up. The queries are exactly the same (as expected).

Is that how Dundas is supposed to work (I really thought not) or am I missing something?


Hi Guillaume,

If the application is sending identical queries for different metric sets that have different combinations of elements, then it sounds like you have turned off subquery optimization in your Manual Select transform. Doing so removes any ability of Dundas BI to interact with and intelligently modify your query. As a result, it simply fires off the template query three times and then processes the results in the three different ways the metric sets require.

The alternative is to turn subquery optimization back on. Once that's done, the dashboard will still send three queries, but they will be queries automatically customized to the needs of each metric set: unnecessary columns are dropped, and aggregation and filtering are performed directly within the query. This should lead to substantial performance gains.
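As a rough sketch of what subquery optimization does (hypothetical table and column names, plain SQL rather than the exact rewriting Dundas BI performs), the template query is wrapped as a subquery and the metric set's column pruning, filtering, and aggregation are pushed into the outer statement:

```python
import sqlite3

# Hypothetical template query, as stored in a Manual Select transform.
TEMPLATE = "SELECT region, product, amount FROM sales"

# With optimization OFF, the template is sent verbatim and all shaping
# happens on the BI side. With it ON, the engine wraps the template as
# a subquery and pushes the metric set's needs into the database:
optimized = f"""
    SELECT region, SUM(amount) AS total     -- only the columns needed
    FROM ({TEMPLATE}) AS base               -- template as a subquery
    WHERE region = 'EMEA'                   -- filter pushed down
    GROUP BY region                         -- aggregation pushed down
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("EMEA", "a", 10), ("EMEA", "b", 5), ("APAC", "a", 7)])

print(conn.execute(optimized).fetchall())  # [('EMEA', 15.0)]
```

The database then only returns the rows each metric set actually needs, rather than the full template result.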

A third option, if for some reason you want to prioritize the number of queries above all else, is to build a single metric set that powers all the visualizations, and then hide/show the subset of the information you want on each of them. As long as the same parameters are hooked up to every visual, they should at that point be serviced by a single query.

Hi Davide, thanks for your feedback.

I have one cube that indeed has all the data to power all 3 visualisations. I do want to prioritise the number of queries, because they are very heavy and can run for multiple minutes. Having 1 query return the data for all 3 visualisations is much, much faster (from the DB point of view) than having 3 queries each return only the subset of data for one visualisation.

I indeed had subquery optimisation off (it was required for an old driver), but I can turn it back on now. I now see 3 queries with slightly different envelopes. That is better than without optimisation, but there are still 3 queries.

I’ll experiment with building a single metric set. I am a bit confused about how it can work but I’ll give it a try.


I looked a bit more into the metric set.

Basically my data is (x, y, z, count), as well as 3 hierarchies, one for each pair combination, and my visualisations are all bubble charts: (x, y with count as bubble size), (x, z with count as bubble size) and (y, z with count as bubble size). I have one hierarchy per pair (x, y), (y, z) and (z, x) to actually do the aggregation.
Based on that, I cannot get my head around building one metric set that could power all 3 visualisations: the combination of left axis and bottom axis is different for each of them, so it is not a matter of simply hiding columns.

If the visualizations require different aggregations across different hierarchies, then it may not be possible to implement them as a single metric set. It also means that querying the raw data once and having Dundas BI process it in memory to fit 3 different metric sets would very likely be even slower than sending three different queries to the database: the database can perform that processing with the benefit of indexes and other infrastructure built to speed these operations up, none of which is available on the Dundas BI side.

Have you tried using the Dundas BI data warehouse? It seems to me that you very likely could set up the cube to pre-aggregate the data to a certain extent to match the combinations of the three hierarchies you are using, and then warehouse that result so that Dundas BI can access it using queries to the warehouse database that are far less expensive.

The query processes tens of millions of rows and outputs only about 10k rows, which can serve all 3 visualisations. Three separate queries would each process those tens of millions of rows to output about 3.3k rows apiece. That would be nicer for Dundas, but taking DB resources into account, the total time would be much shorter if only one query were sent.
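The processing the poster would like to happen on the Dundas BI side can be sketched in memory: fetch the (x, y, z, count) rows once, then sum the count over each pair of columns (the data and column positions below are hypothetical placeholders for the cube's output):

```python
from collections import defaultdict

# One result set from the single heavy query: (x, y, z, count) rows.
rows = [
    ("x1", "y1", "z1", 4),
    ("x1", "y1", "z2", 2),
    ("x2", "y1", "z1", 5),
]

def aggregate(rows, i, j):
    """Sum 'count' over one pair of columns, e.g. (x, y) or (y, z)."""
    totals = defaultdict(int)
    for row in rows:
        totals[(row[i], row[j])] += row[3]
    return dict(totals)

xy = aggregate(rows, 0, 1)  # bubble chart 1: x vs y
yz = aggregate(rows, 1, 2)  # bubble chart 2: y vs z
zx = aggregate(rows, 2, 0)  # bubble chart 3: z vs x

print(xy)  # {('x1', 'y1'): 6, ('x2', 'y1'): 5}
```

One pass over the single result set yields all three bubble charts; the open question in the thread is whether Dundas BI's metric sets can share a result set this way when the axes differ.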

Warehousing is unfortunately not possible with parameterised cubes (7 parameters) - and if I were to warehouse all possible combinations, there would not be much gain in warehousing.

I thus understand from your answers that I cannot do what I want. Thanks for the help!