Cascading data cubes - build errors

@ole.christian.valstad Based on Adrian’s response, I think it makes the most sense to manually trigger any cubes you want to run in parallel, for now at least. I have the same situation, so that’s what I’ll do. I have the API call each cube one at a time right now, which isn’t working (since the dependency isn’t completing before the downstream cube fires).
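For illustration, here is a minimal sketch of what I'm doing today. The endpoint paths, the `sessionId` parameter, and the cube IDs are placeholders of my own, not the actual Dundas BI routes; the point is just that each build is triggered as soon as the previous request returns, so the downstream cube starts before its dependency has finished building.

```python
import requests

BASE_URL = "https://my-dundas-server/api"   # hypothetical server URL
SESSION_ID = "..."                          # session key obtained at logon

# Fire-and-forget: each POST returns as soon as the build is *started*,
# so Cube 2 gets triggered while Cube 1 is still running.
for cube_id in ("cube-1-id", "cube-2-id"):
    requests.post(
        f"{BASE_URL}/datacube/{cube_id}/build",   # hypothetical endpoint path
        params={"sessionId": SESSION_ID},
    ).raise_for_status()
```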

I think what we need is a ‘pending’ or ‘queued’ status (or something similar) in the jobs list. Currently they all show up as running - if I understand Adrian correctly - and that is confusing/worrying.


The status belongs to the job. As this is a single sequential job, its status is shared by all the cubes contained within.

I understand that is how it is implemented, but I think @david.glickman, @ken and I are asking whether this could be expanded on in the future, so that it is easier to keep track of each individual data cube within a sequential job.

If I had 10, 20 or 50+ cubes that I wanted to run sequentially on a schedule, it would be very useful to be able to see the status and statistics for each individual data cube in the job overview.

Don’t get me wrong here, the batching feature introduced in version 9 is great, and I’m very happy to see Dundas add more and more functionality for data cubes. The “ETL Layer” is great and part of what makes Dundas unique; I/we are just asking for it to evolve further :slight_smile:


Just to be clear:

  1. Batching can be used to run cubes manually from the UI
  2. Batching can be used to set up timed schedules
  3. But batching cannot be used to set up runs via the API

Is that right?

Just to be clear - my issue is that I trigger the cube runs via the API.

So if Cube 1 needs to run before Cube 2, I have to call the API to trigger Cube 1 and then keep the process open, waiting for it to complete (which can be a long time), before I can call the API to run Cube 2.

That’s the problem I am trying to solve in my use case. The wait time is moderate currently, but could be prohibitive in future use cases.
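Something like the following is the workaround I mean: trigger a cube, poll its job until it reaches a terminal state, and only then trigger the next one. This is only a sketch; the endpoint paths, the `jobId`/`status` response fields, and the status names are my assumptions about the REST calls involved, not confirmed Dundas BI routes.

```python
import time
import requests

BASE_URL = "https://my-dundas-server/api"   # hypothetical server URL
SESSION_ID = "..."                          # session key obtained at logon

def run_cube_and_wait(cube_id, poll_seconds=30, timeout_seconds=3600):
    """Trigger a warehouse build for one cube, then block until the job finishes."""
    # Hypothetical endpoint and response shape; check the actual API reference.
    resp = requests.post(
        f"{BASE_URL}/datacube/{cube_id}/build",
        params={"sessionId": SESSION_ID},
    )
    resp.raise_for_status()
    job_id = resp.json()["jobId"]

    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(
            f"{BASE_URL}/job/{job_id}",
            params={"sessionId": SESSION_ID},
        ).json()["status"]
        if status in ("Completed", "Failed", "Canceled"):
            return status
        time.sleep(poll_seconds)   # wait before polling the job again
    raise TimeoutError(f"Cube {cube_id} did not finish within {timeout_seconds}s")

# Run Cube 2 only after Cube 1 has actually completed.
for cube_id in ("cube-1-id", "cube-2-id"):
    result = run_cube_and_wait(cube_id)
    if result != "Completed":
        raise RuntimeError(f"Cube {cube_id} ended with status {result}")
```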

All DBI functionality is API-driven, so sequential jobs can be triggered via the API as well.

I thought so, but I cannot find a way to properly trigger it via the API. The dev console shows a POST being made, but it is tied to a session and I’m unsure how that session is created. I can’t find anything in the documentation about how to do it, and when I called support on Friday they were unsure too. Can you assist in getting that call working from the API?
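In case it helps, here is my guess at the pattern, assuming the session has to be obtained from a logon call first and its key passed on subsequent requests. The endpoint names, payload fields, and the `sessionId` response field are all assumptions on my part; this is exactly the part I’d like confirmed.

```python
import requests

BASE_URL = "https://my-dundas-server/api"   # hypothetical server URL

# Step 1: log on to obtain a session key. The endpoint name and payload
# fields are assumptions; verify them against the Dundas BI REST API docs.
logon = requests.post(
    f"{BASE_URL}/logon",
    json={
        "accountName": "api-user",   # service account used for automation
        "password": "********",
    },
)
logon.raise_for_status()
session_id = logon.json()["sessionId"]

# Step 2: pass the session key on later calls, e.g. to trigger a cube build
# (again, a hypothetical endpoint path for illustration only).
cube_id = "cube-1-id"
requests.post(
    f"{BASE_URL}/datacube/{cube_id}/build",
    params={"sessionId": session_id},
).raise_for_status()
```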

Yes, of course. I can point you to the server API calls needed for this.