Is anyone using Dundas BI on AWS? If so, are you using SQL Server RDS as your AppDb and Warehouse? Are you using Redshift as any of your data sources? Are you using S3? What has your experience been like?
We're not doing that, sorry.
I wish we were using it, but unfortunately not.
We certainly have users on these technologies, but the details are seldom shared with us; we typically only hear from people in these setups when they have a specific problem.
We currently have Dundas on an EC2 instance using a local Postgres database for the AppDB and Warehouse. Back when we did the installation, the Postgres RDS version was not supported due to superuser restrictions. I'm not sure if this is still the case, or whether it applies to SQL Server RDS; you will need to check with support. We use Snowflake and are getting excellent and reliable performance, and Redshift should be no different. We have not had a chance to leverage S3, but as long as your bucket is in the same region as your instance, performance should be good. We will be testing GCP soon. Let's see how that goes, but overall, it's been a pleasant experience with AWS.
Yes, I am using Redshift to query all the data. Dundas cubes seem to have a largish overhead, so simplifying our queries as much as possible has helped with speed. We have a separate table dedicated to every report/dashboard group, so a small SELECT with a few restrictions retrieves all the data the user needs. From there we use the built-in filtering to narrow the data further. When necessary, we use bridge parameters to alter the query with placeholders, but performance takes a hit when we do: Redshift doesn't like the query text changing repeatedly and prefers to cache a previous result that it can keep serving, which we then filter further afterwards.
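The pattern described above — one dedicated table per report/dashboard group, queried with a small, stable SELECT so Redshift's result cache stays useful — can be sketched roughly like this. Table and column names here are made up for illustration, and the placeholder style assumes a psycopg2-like driver; this is not Dundas's actual bridge-parameter mechanism, just the general idea of keeping the SQL text constant and varying only the bound value:

```python
def build_group_query(group_table: str, region: str):
    """Build the small, restricted SELECT for one report/dashboard group.

    The SQL text stays identical across runs -- only the bound parameter
    value changes -- so Redshift can keep reusing its cached result/plan
    instead of treating every request as a brand-new query.
    """
    # %(region)s is a driver-side placeholder, not string interpolation,
    # so the query text itself never changes between users.
    sql = f"SELECT * FROM {group_table} WHERE region = %(region)s"
    params = {"region": region}
    return sql, params

# Example: one stable query per dashboard group, parameter varies per user.
sql, params = build_group_query("sales_dashboard_facts", "emea")
```

Rewriting the SQL itself (as bridge parameters with placeholders effectively do) is what defeats this caching and causes the performance hit mentioned above.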
We also use connection overrides so that different users are automatically connected to their own Redshift database. This gives us the multi-tenancy we need and lets us tailor reports and dashboards to each user.
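Conceptually, that per-user connection override boils down to a lookup from tenant to connection parameters. A minimal sketch, assuming entirely hypothetical tenant names, hosts, and database names (Dundas handles this internally; this only illustrates the mapping):

```python
# Hypothetical tenant-to-database mapping; every value here is invented.
TENANT_DATABASES = {
    "acme":   {"host": "acme-cluster.example.redshift.amazonaws.com",
               "dbname": "acme_dw"},
    "globex": {"host": "globex-cluster.example.redshift.amazonaws.com",
               "dbname": "globex_dw"},
}

def connection_params_for(tenant: str) -> dict:
    """Return the per-tenant Redshift connection parameters.

    Raises ValueError for unknown tenants so a misconfigured user can
    never silently fall through to another tenant's data.
    """
    try:
        return TENANT_DATABASES[tenant]
    except KeyError:
        raise ValueError(f"no connection override configured for {tenant!r}")
```

Failing loudly on an unknown tenant is the important design choice here: in a multi-tenant setup, a wrong default database is worse than an error.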
Aww, sorry, we are not using that right now.
We're using Lightsail with a couple of instances, both with local Postgres. That seems to work pretty well with 16 GB RAM, 4 vCPUs, 320 GB SSD, and Windows Server 2016. We will be transitioning to EC2 soon to be able to provision with Ansible, and to see if there are any performance improvements with gp2 or io1 storage volume types.
I am interested in your data connection to Snowflake. What should the host URL look like? Thanks
With the native data provider, the URL is the same as in a browser: "your_account.us-west-1.snowflakecomputing.com". You can adjust connection parameters within Snowflake for the user set up in Dundas.
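The host format above is just account identifier plus region plus the snowflakecomputing.com domain, so it can be assembled mechanically. A tiny sketch (the account and region values are placeholders from the post, not real credentials):

```python
def snowflake_host(account: str, region: str) -> str:
    """Assemble a Snowflake host URL in the browser-style form
    described above: <account>.<region>.snowflakecomputing.com.
    """
    return f"{account}.{region}.snowflakecomputing.com"

# Example using the placeholder values from the post:
host = snowflake_host("your_account", "us-west-1")
```

The same string is what you would paste into the Dundas data-connector host field.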