Forum Discussion

osdevadmin
New Contributor III
10 months ago

CubeView Performance - OneStream Dashboard Vs FDX

Hi Community, we have a cube view that pulls more than 40k records when run for a period in a dashboard. It runs at base level for a couple of UDs that have ~30k members and ~200 members respectively. It renders fairly quickly in the dashboard (albeit with paging), but when we export the output by running FDX via a DM job (through a Business Rule), it takes a lot longer. I thought the time it takes to run a CV manually should be the same as when it is called via a Business Rule. Please let me know how I can match the cube view's interactive performance when I call it from a Business Rule. I checked and matched the sparse row and column settings, among other checks, but cannot figure out why it runs slower through the Business Rule.

  • Henning
    Valued Contributor II

    Hi, as you mention, paging is used on the CV. This means only the data displayed on the current page is pulled into memory, if I remember correctly. An FDX call does not "page" the data; it pulls the entire data set into memory, which is why the two cannot be compared in this case.

    You can test moving your dimensions around in the CV. Changing the order of dimensions significantly improved performance for a colleague a while ago.

  • JackLacava
    Honored Contributor

    As Henning said, CVs use paging precisely to avoid this sort of performance issue, as well as other fine-tuning. Trying to compare them to a BR is not very useful. What will be useful is trying to speed up your BR; there might be things you've not optimised...

    For example, FDX calls allow you to specify the level of parallelization they will employ; did you try tweaking that? If it's too low, the extract runs slower than it could; if it's too high, it slows down the server.
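    For illustration, a minimal sketch, reusing the FdxExecuteDataUnit call that appears later in this thread; every argument except parallelQueryCount is whatever you already pass:

        ' Benchmark a few values (e.g. 2, 4, 8) and keep the lowest one that
        ' meets the runtime target; higher values load the server harder.
        Dim parallelQueryCount As Integer = 4

        dtResults = BRApi.Import.Data.FdxExecuteDataUnit(si, cubeName, entityMemFilter, consName, scenarioTypeID.Actual, scenarioMemFilter, timeMemFilter, viewName, True, filter, parallelQueryCount, logStatistics)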

    Also, extracting data into memory is only half the battle - data serialization, where you save the output to file, is likely to be slow. If your BR is saving data to a file, did you try timing how long that takes versus how long the FDX call takes to return? There might be ways to speed up serialization, depending on output requirements, but if you need One Big File, chances are it's always going to be fairly slow - you're outputting a lot of metadata names with those records (unlike the CV...).
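    For example, a minimal timing sketch; the placeholder comments stand in for your actual extract and file-write code:

        ' Time the extract and the serialization separately to see where the
        ' runtime actually goes; results land in the Error Log.
        Dim sw As New System.Diagnostics.Stopwatch()

        sw.Start()
        ' ... your FdxExecuteCubeView / FdxExecuteDataUnit call here ...
        sw.Stop()
        BRApi.ErrorLog.LogMessage(si, "FDX extract: " & sw.ElapsedMilliseconds.ToString & " ms")

        sw.Restart()
        ' ... your file-write / serialization code here ...
        sw.Stop()
        BRApi.ErrorLog.LogMessage(si, "Serialization: " & sw.ElapsedMilliseconds.ToString & " ms")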

  • Sai_Maganti
    Contributor II

    Have you tried using FdxExecuteDataUnitTimePivot rather than FdxExecuteCubeViewTimePivot? I was able to extract circa 220K records, with all the base Dims and 12 time columns, using the data unit pivot output to a grid view in under 90 seconds from a Dashboard Data Set BR. If you need to write to a table, you could use SQL bulk copy at more or less the same speed (see the sketch below).
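    For illustration, a minimal SqlBulkCopy sketch, assuming the FDX result is already in a DataTable named dtResults; the connection string and destination table name are placeholders, and the destination table must already exist with matching columns:

        ' Bulk-copy the FDX DataTable into SQL Server in batches instead of
        ' inserting row by row.
        Using conn As New System.Data.SqlClient.SqlConnection("<your connection string>")
            conn.Open()
            Using bulk As New System.Data.SqlClient.SqlBulkCopy(conn)
                bulk.DestinationTableName = "dbo.YourStagingTable"
                bulk.BatchSize = 10000          ' tune to your environment
                bulk.WriteToServer(dtResults)   ' one round trip per batch
            End Using
        End Using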

  • osdevadmin
    New Contributor III

    Thank you JackLacava and Henning. I suspected paging is what makes the CV run so well in the dashboard: it probably pulls the first ~1,000 or so records for display and fetches the next batch as you move through the pages. I used FDX to pull all 12 base periods of the year, and with a parallelization of 4 it ran 4 CVs at once, each taking 10% of system resources, so I was a little apprehensive about deploying FDX with parallelization in production. I am using a DataTable to store the data, since FDX returns a DataTable, before I run code to copy the rows from the DataTable to a SQL Server table. Is there a different object I could use for storing the data? I haven't explored that yet. I'll try moving the dimensions around in the CV as well.

  • osdevadmin
    New Contributor III

    Thank you Sai_Maganti, I tried using this. I am running the following to get data from the data unit, but I am unable to get the results limited to the filter I have applied. Is there anything wrong with how I am passing the filter?

    dtResults = BRApi.Import.Data.FdxExecuteDataUnit(si, cubeName, entityMemFilter, consName, scenarioTypeID.Actual, scenarioMemFilter, timeMemFilter, viewName, True, filter, parallelQueryCount, logStatistics)   

    I am passing filter like following:

    ' Build the account list and compose the FDX filter string
    Dim lstAcct As New List(Of String)
    Dim acctArray As String() = {"GrossRevTime", "BillExps"}
    lstAcct.AddRange(acctArray)

    Dim filter As String = "UD2='LE00037' AND Flow='Top' AND UD3='200020' AND UD5='Top' AND UD6='Top' AND UD7='None' AND UD8='None' AND Origin='Top' AND " & SqlStringHelper.CreateInClause("Account", lstAcct, True, True)

    I checked in the Error Log; the filter is applied as follows:

    UD2='LE00037' AND Flow='Top' AND UD3='200020' AND UD5='Top' AND UD6='Top' AND UD7='None' AND UD8='None' AND Origin='Top' AND (Account In ('GrossRevTime', 'BillExps'))


    • Sai_Maganti
      Contributor II

      Use the following filter. The Tops don't work; you can only filter on base members. If you need to filter the 'None' members for UD7 and UD8, add them.

      Filter = "UD2='LE00037' AND UD3='200020' AND Account In ('GrossRevTime', 'BillExps')"