Hello, I am full outer joining 2 Dundas data cubes. These are warehoused using the PostgreSQL backend in Dundas. The join column (sku) is then duplicated in the output, once from each table. When I use the “Merge” transform to merge them into 1 sku column, it becomes an expensive in-memory operation. If I were using Postgres directly I would simply do a [coalesce(table1.sku, table2.sku) as sku] in the select. Is there a better way to do this with warehoused cubes without bringing everything into memory? I want to do further joins on SKU after this, which I have confirmed run very slowly. My current plan is to make the first full join/merge its own cube and then create a new cube that joins on the original cube, to get around the slow joining.
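To illustrate what I mean, here is a toy sketch (plain Python, hypothetical data, not Dundas-specific) of the coalesce behavior I want from the full outer join — one non-null sku per row:

```python
# Two toy "tables" keyed by sku; B appears in both, A only in table1, C only in table2.
table1 = [{"sku": "A", "price": 10}, {"sku": "B", "price": 20}]
table2 = [{"sku": "B", "stock": 5}, {"sku": "C", "stock": 7}]

t1 = {row["sku"]: row for row in table1}
t2 = {row["sku"]: row for row in table2}

result = []
for sku in sorted(t1.keys() | t2.keys()):  # union of keys = full outer join
    left = t1.get(sku)   # None when sku exists only in table2
    right = t2.get(sku)  # None when sku exists only in table1
    result.append({
        # COALESCE(table1.sku, table2.sku): first non-null of the two join columns
        "sku": (left or right)["sku"],
        "price": left["price"] if left else None,
        "stock": right["stock"] if right else None,
    })
# result has rows for A, B, C — and the sku column is never null
```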
You could simply deselect one of the join columns, or ignore the duplication and just hide the duplicated hierarchy in the Result transform.
Hello, the reason I need them merged is that it’s a ‘full’ join, so sometimes table1.sku will be null and sometimes table2.sku will be null. The merge works well to combine them (it always gives me the non-null value), but I’m wondering if there is a better way performance-wise, since Merge is an in-memory operation.