Show and tell on server specs?

I’m wondering if I can get some feedback about everyone’s server configuration, especially if your setup and environment are similar to mine.


My environment: We have five database sources, two Oracle, two MS SQL, and one PostgreSQL. Ultimately we will have a maximum of 200 users, more likely 50 actually using it, with about 15 power users and 8 developers. Right now it’s more like 10 users, 1 power user, and 3 developers using the system.


For Dundas - I have a test and a production instance on the same Windows server. The Dundas database for both instances is hosted on a separate MS SQL Server.


We started with 2 processors and 16 GB of RAM on our Dundas server, per the suggested minimum, and for the first six months of early development and deployment all was running relatively well. Dashboards and data cubes both seemed quick enough until I installed a new instance of V5, and since then my server has been dragging.


In the last week my Dundas server has been getting hammered by I don’t know what (I’m working with support to try to understand), so in the meantime I have doubled the initial RAM and CPUs to 32 GB and 4. The server is still dragging, and one of my few users reported it took over seven minutes to get a report.


So, what is your server setup like? Please share and include as many details as possible (number of users, number of data sources, number of cross-data-source joins, etc.).

Hi Joyce,

I am facing the same issue on one of our installed Dundas BI v5 instances, and I had a ticket with support about it last week.

I noticed that the application pool was consuming the CPU, so we increased the CPUs to 8 and the RAM to 32 GB. Performance improved a little, but I still believe the issue was not there on v4.

Hi Mohammad,

I had a support call with Rob and Pankaj, and in doing so I’ve worked out some of my issues with the test and prod instances running 4.0.2.1008, as long as my V5 site is off. It seems my problems had more to do with some of my larger cross-database data cubes not being warehoused. I still have some weirdness with maxed-out network traffic, but until I find a way to stress-test my dashboards I’m not sure how to figure out what the issue is there.


I’ve got some reports I need to build this week, but I’m going to get back to testing V5, hopefully by end of week, and see what happens when I turn it on after importing all my current projects into it.

We have anecdotal evidence that V5 is consuming more resources on our server as well. We have 8 cores and 192 GB of RAM, with ~280 users and around 50 of them active daily. Obviously, we are having less of an issue with this impacting users, but it does concern us for the long term. Gathering the data to back this up isn’t a priority for us right now due to several other large projects. Hopefully there’s a change coming once Dundas recognizes that something is going on here.

I re-opened my ticket with support since my server is getting hammered, but I also haven’t gotten a firm answer on what server specs I should have for my environment. We are not even at true production yet, and this is not a feasible situation for me to even think about bringing our other users online.



Anyone have ideas on how to troubleshoot this? The server has 32 GB of RAM and 4 processors. The only thing on this Windows Server 2016 VM is IIS for the two instances I am running (test and prod).

[Screenshot]
The fact that you have V5 running on the server shouldn't be related to the CPU and disk maxing out. We will gladly look into any evidence that V5 is consuming more CPU and memory than it should, but we haven't seen any to date, and most of our customers are running V5, so I would suggest isolating that from your troubleshooting process.

In most cases when the CPU/memory are at full capacity due to a Dundas BI task, it is related to a data job (i.e., a query that returns millions of records to the client side, or a data cube storage job that brings millions of records to the Dundas BI server and then performs calculations on the server). We are actually working on adding checks that will alert users about such operations in the future, in order to prevent full utilization of the server's CPU/memory.

In the meantime, I suggest trying to isolate the query or data cube that is causing this issue. You mentioned you have a report that can take 7 minutes to load - I would recommend starting there and looking at how much stress the data cubes powering that report add to the server. One way to do so: recycle your Dundas BI application pool or restart IIS (if Dundas BI is the only application running on the server). Then check that the server's CPU and memory utilization are low (in V4 from the server's task manager; in V5 from the new admin landing page). Then run the storage job of the data cubes in question, or the metric set powered by the data cube if it isn't stored in-memory or in the warehouse.

If that data cube is indeed causing high CPU/memory utilization, then you should optimize it. For example, if you join two separate data sources such as SQL Server and Oracle, you may want to filter each one as early as possible (before the join) so the join on the Dundas BI server is done with smaller datasets to begin with. Or you may even want to store the two sources in warehoused data cubes and then create a third data cube that joins the two original cubes, so the join can be executed on the database server rather than bringing all data into the Dundas BI server prior to the join. There are of course other ways to optimize it - all depending on your data modeling needs. I would recommend following this article in order to make sure that your data jobs are not taking over your server resources.
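The "filter as early as possible" advice can be sketched in plain Python. This is only an illustration of the principle, not Dundas API code; the sources, column names, and the naive hash join here are all made up for the example:

```python
# Rough illustration (not Dundas-specific) of why filtering each source
# *before* a cross-database join matters: the work done on the BI server
# scales with the size of the inputs, not the size of the result.

def join_on_key(left, right, key):
    """Naive hash join of two lists of dicts on `key`."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    return [
        {**l, **r}
        for l in left
        for r in index.get(l[key], [])
    ]

# Two hypothetical sources (stand-ins for, say, an Oracle table and a
# SQL Server table being joined inside the BI layer).
orders = [{"cust_id": i % 1000, "year": 2015 + i % 4, "amount": i}
          for i in range(10_000)]
customers = [{"cust_id": i, "region": "EMEA" if i % 2 else "NA"}
             for i in range(1000)]

# Join-then-filter: all 10,000 order rows are joined first, then filtered.
slow = [r for r in join_on_key(orders, customers, "cust_id")
        if r["year"] == 2018]

# Filter-then-join: each source is reduced *before* the join ever happens,
# so the join touches a quarter of the rows for the identical result.
orders_2018 = [o for o in orders if o["year"] == 2018]
fast = join_on_key(orders_2018, customers, "cust_id")

assert len(fast) == len(slow)  # same result, far less data joined
```

The same idea applies to the warehoused-cubes approach in the reply above: by staging each source first, the expensive join runs against already-reduced, locally stored data instead of full remote result sets.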

As I mentioned:

1) It is not limited to just one query.
2) I’ve been dealing with support directly for a week now.
3) I was looking for help figuring out how to troubleshoot since my server just freezes up.


Based on reading others' replies, I'm really at the low end of query volume and server usage, but given that the server keeps maxing out, I'm also really nervous about deploying to our full user base.

I'm starting to rethink our choice for BI.

I do not have many users yet, but over the next two years that should grow (yes, we will contact our account rep when we need more seats).

BTW, what is this number counting? Does it include groups, or just Local Users, Windows Users, and Virtual Users?

[Screenshot]


For my stats:

I am on V5.0.0.1010 of Dundas.

Azure server

Quad-core 2.40 GHz Xeon E5-2673

28 GB of memory, using 2.4 GB

and for my needs right now it is overkill.


One day on my dev server I changed all my warehouse-stored cubes to in-memory storage as a test.

There was a small difference, but warehouse storage is so fast that it just wasn't worth changing.