I understand the benefit of using the Data Warehouse build option for data cubes - improved performance by creating a cached table in the DBI Warehouse database. So far, so good.
In general, should I design my data cube so that its results are more data-warehouse-like? That is, rather than returning a table with many user-friendly varchar columns, should the results return only surrogate keys or integer key values? If I take the second approach, I would then create a user hierarchy for each one, pairing the surrogate/key field with a friendly description field from a dimension table. Does the number of hierarchies matter?
I guess what I’m getting at is: will I see more performance gains if my data cube’s query follows a proper star schema rather than returning many variable-length data type fields (and is it worth spending time testing this)? Or will Dundas just warehouse whatever you throw at it? (After some review, it seems that this is the case.)
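To make the two shapes I mean concrete, here is a small sketch using SQLite as a stand-in for the warehouse database. All table and column names here are my own invention for illustration, not actual Dundas objects:

```python
import sqlite3

# Hypothetical star-schema setup: the dimension table stores the friendly
# varchar text once; the fact table stores only integer surrogate keys.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT
    );
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        amount      REAL
    );
    INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Option A ("warehouse-like"): the cube returns narrow integer keys, and the
# friendly caption would come from a hierarchy/dimension lookup at display time.
narrow = con.execute(
    "SELECT product_key, SUM(amount) FROM fact_sales GROUP BY product_key"
).fetchall()

# Option B: the cube's query joins in the dimension and returns the wide,
# user-friendly varchar column directly.
wide = con.execute("""
    SELECT d.product_name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.product_name
""").fetchall()

print(narrow)
print(wide)
```

Both options produce the same totals; the question is whether handing Dundas option A gives the warehouse build anything meaningful to work with versus just handing it option B.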
If I look at the steps being performed, the Job Run Details display something along the lines of the following:
- Fact table: [zd__]
  - This creates a table in the DBI warehouse database with the results from the data cube, for faster retrieval.
- Creating indexes for fact table
  - This creates indexes on the table created in step 1.
- Hierarchy ‘’, level ‘B.’: [zd_b_]
  - This caches the hierarchies included in the data cube as well.
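If my reading of those job steps is right, they roughly amount to something like the following (sketched against SQLite; the table names below are placeholders of my own, standing in for the zd_-prefixed names in the Job Run Details):

```python
import sqlite3

wh = sqlite3.connect(":memory:")

# Step 1 - "Fact table": materialize the cube's query results into a table.
# (Here the SELECT is a trivial stand-in for the cube's actual query.)
wh.execute("CREATE TABLE zd_fact AS SELECT 1 AS product_key, 15.0 AS amount")

# Step 2 - "Creating indexes for fact table": index the table from step 1.
wh.execute("CREATE INDEX ix_zd_fact_product ON zd_fact (product_key)")

# Step 3 - "Hierarchy ..., level ...": cache each hierarchy's key/caption
# pairs in its own table so lookups don't hit the source again.
wh.execute(
    "CREATE TABLE zd_b_product AS SELECT 1 AS member_key, 'Widget' AS caption"
)

fact_rows = wh.execute("SELECT * FROM zd_fact").fetchall()
print(fact_rows)
```

Please correct me if any of that interpretation is off.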