Forum Discussion
Hey Raf!
I am with Frank on this one. 🙂 Be careful with your SQL!
Perhaps you could explain to us what you are trying to achieve, and we might be able to guide you.
That being said, here are the details of how the vDataRecordAll view is built:
CREATE VIEW [dbo].[vDataRecordAll] AS
SELECT * FROM DataRecord1996
UNION ALL
SELECT * FROM DataRecord1997
UNION ALL
..... UNTIL
SELECT * FROM DataRecord2099
UNION ALL
SELECT * FROM DataRecord2100
As you can see, it just stacks the tables on top of each other... nothing more.
And now you are going to ask what those DataRecordYYYY tables contain... well, it is the cube data per year!
Hi FrankDK / NicolasArgente
Thanks for the feedback.
I agree with that statement 🙂 I already know the content of the view; I was interested in mapping the dimension members to the IDs I see in there.
I am having some performance issues while doing allocations. I get the result, but it's just far too many combinations - we are trying to drop some of the detail with the client.
So I thought of bringing the data from the cube into a table (I would then read the source data from that table), applying the logic there, and writing the output to a different table. I believe this could run a bit quicker (I have done it plenty of times in the past) 🙂 Currently it takes 10 hours for 12 periods and 10k+ entities, with a breakdown by Account (a few thousand) and Cost Centre (another few thousand).
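A minimal sketch of what I have in mind, just to make the idea concrete. The table names (StagingDataRecord, AllocationDriver, AllocationOutput), the columns (TimeId, EntityId, AccountId, CostCentreId, Amount) and the period IDs are placeholders only - the real schema would follow the cube's data record layout:
-- 1) Land the view data for the periods in scope into a staging table
-- (placeholder period range)
DECLARE @StartPeriod INT = 202201;
DECLARE @EndPeriod   INT = 202212;

SELECT *
INTO StagingDataRecord
FROM vDataRecordAll
WHERE TimeId BETWEEN @StartPeriod AND @EndPeriod;

-- 2) Apply the % allocation by joining a driver table of source-to-target splits
SELECT
    s.EntityId,
    s.AccountId,
    d.TargetCostCentreId           AS CostCentreId,
    s.Amount * d.AllocationPercent AS Amount
INTO AllocationOutput
FROM StagingDataRecord s
JOIN AllocationDriver d
    ON d.SourceCostCentreId = s.CostCentreId;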
Cheers
- FrankDK · 2 years ago · Contributor
Is this allocation using a Custom Calc (Finance Business Rule)? If yes, then make sure you really dig deep into optimizing the performance of that part. Maybe test whether you can gain speed using a custom parallel run.
- RafMarques · 2 years ago · New Contributor III
It's a Custom Calc, yes, and I did try a parallel run 🙂 It runs to 27% and then breaks. But I realised the calculations already run in parallel (coming from Data Management, the entities are executed in parallel because of the data unit).
- db_pdx · 2 years ago · Valued Contributor
Hi Raf: are you using data buffers for your calculations? If not, start there. Also, do you truly mean 10k+ entities? That could imply tens of billions of unique intersections depending on the density of the data (10,000 entities * 2,000 accounts * 2,000 cost centres = 40 billion combinations). I would love to understand more about what you're working towards, even if just to satisfy my curiosity about 'the cool things possible in OneStream'.
- RafMarques · 2 years ago · New Contributor III
I am using data buffers and the calculation is huge - it finished after running for 11 hours and generated 71 million rows.
This is a Cost Centre transfer based on percentages.
For example:
CC1 as source goes to (target) CC2 at 50%, CC3 at 25%, and CC4 at 25%.
This is based on Entity, Cost Centre (CC), and Account.
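If it helps picture the split, the driver rows for that example would look something like this (same placeholder table and column names as in my earlier sketch):
-- Hypothetical driver rows for the CC1 example above
INSERT INTO AllocationDriver (SourceCostCentreId, TargetCostCentreId, AllocationPercent)
VALUES ('CC1', 'CC2', 0.50),
       ('CC1', 'CC3', 0.25),
       ('CC1', 'CC4', 0.25);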