UD34, UD56 and UD78 columns in DataRecords

aabirached
New Contributor II

Hello everyone,

While checking a DataRecord table in the OneStream application database, I noticed that the following three columns exist: UD34, UD56 and UD78.

Does anybody know what these columns are used for?

Is there a method to calculate the value that is stored in each of these columns?

Thank you in advance for your help.

Regards,

 


Henning
Valued Contributor II

Hi,

The data record tables use a clustered index, and the columns you mention are part of that index. It allows processing to focus more specifically on the relevant records in those tables.

I am not aware of another use for them other than the index. As for how to calculate the value of each ID in the columns, what would you need that for? I am not sure whether I am actually allowed to share that information. I hope knowing that these columns are used for the Clustered Index suffices 🙂
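If you just want to confirm which columns make up the clustered index on the table you were looking at, you can read the SQL Server index metadata instead of trying to reverse-engineer the values. Below is a minimal, read-only sketch for a Finance business rule; it reuses the BRApi.Database.GetDataTable call that appears in the larger snippet further down this thread, and the table name 'DataRecord2024' is only a placeholder for whichever DataRecord table you are inspecting:

// Read-only sketch: list the columns of the clustered index on a DataRecord table.
// 'DataRecord2024' is a placeholder - replace it with the table you are inspecting.
using (var appdb = BRApi.Database.CreateApplicationDbConnInfo(si))
{
    var sql = @"SELECT c.name AS ColumnName, ic.key_ordinal AS KeyOrdinal
                FROM sys.indexes i
                INNER JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
                INNER JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
                WHERE i.object_id = OBJECT_ID('DataRecord2024') AND i.type_desc = 'CLUSTERED'
                ORDER BY ic.key_ordinal";
    var dt = BRApi.Database.GetDataTable(appdb, sql, new List<DbWhere>(), new List<DbOrderBy>(), false);
    foreach (DataRow row in dt.Rows)
    {
        // Log each index column and its position in the key
        BRApi.ErrorLog.LogMessage(si, row["KeyOrdinal"].ToString() + ": " + row["ColumnName"].ToString());
    }
}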

aabirached
New Contributor II

Hello Henning,

Thank you for your fast reply 😊

I am trying to add some data to the data record table directly, and I was studying the column values to prepare the insert statement.

Is there any SQL function that helps me add the data without having to calculate these columns (UD34, UD56 and UD78)?

The same question for the partition ID. I understood that this column is used by OneStream’s in-memory engines to split up processing by EntityId. Is there a way to know what value I should give to this column?

Thank you in advance for your help.

Regards,

 

Henning
Valued Contributor II

Hi,

Thanks for getting back to me with your use case!

Adding data directly to the data tables using SQL is strongly discouraged. It risks system integrity because you only update that one table and not the related tables (e.g. the audit tables). You should always create your data in a business rule, for instance with the help of SQL and a DataTable or DataSet, and then save the data to the database using native OneStream functions (APIs).

Writing directly via SQL also risks leaving the solution unsupported by OneStream Support if something breaks.
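To illustrate the difference, here is a minimal sketch of writing data through the OneStream API from a Finance business rule instead of inserting rows with SQL. The target POV members below (A#Cash, U1#None) are placeholders, not something from this thread; for bulk loads, the data buffer approach shown further down is the usual route:

case FinanceFunctionType.Calculate:
    // Write a value through the calculation API rather than inserting into the data tables directly.
    // "A#Cash:U1#None" is a placeholder target - replace it with your own members.
    api.Data.Calculate("A#Cash:U1#None = 1000");
    break;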

aabirached
New Contributor II

Thank you, Henning, for this information.

I will check how to do it using a business rule as mentioned.

Do you know whether it is possible to add consolidated data using a business rule without going through OneStream's consolidation process?

Thank you for your help.

Regards,

Hi @aabirached,

Am I right in assuming that you are trying to migrate consolidated historical data from an existing source system into OS, and that is why you need to be able to load parent level / consolidated data instead of consolidating it in OS?

If that is the case you could follow these steps:

1 - Load data into Stage

[Screenshot: FredLucas_0-1718715798607.png]

Can Load Unrelated Entities must be set to True. There is no need to assign any entity to this workflow.

The resulting load will look something like this:

[Screenshot: FredLucas_1-1718715843940.png]

Because this is stage only, the loaded data is validated against pure metadata only (meaning it just checks whether a member with that name exists). This allows you to load and validate parent entities too.

2 - Write a Finance rule that gets the data from Stage, converts it into data buffers and writes it to the cube. See the sample snippet below (in C#).

Please do read the comments, as some are very important to take into account:

case FinanceFunctionType.Calculate:
    var entityDbw = new DbWhere("EtT", DbOperator.IsEqualTo, api.Pov.Entity.Name);
    var scenarioDbw = new DbWhere("SnT", DbOperator.IsEqualTo, api.Pov.Scenario.Name);
    var timeDbw = new DbWhere("TmT", DbOperator.IsEqualTo, api.Pov.Time.Name);
    using (var appdb = BRApi.Database.CreateApplicationDbConnInfo(si))
    {
        // Only select the columns for the dimensions you care about, to avoid loading a bunch of empty strings.
        // Also keep in mind that the column order is respected in the data table, which indexes columns zero-based; later we simply use the column ordinal, not the column name.
        var df = BRApi.Database.GetDataTable(appdb, "SELECT Am,AcT,IcT,U1T,U2T FROM StageSummaryTargetData", new List<DbWhere> {entityDbw, scenarioDbw, timeDbw}, new List<DbOrderBy>(), false);

        // Create an empty buffer
        var db = new DataBuffer();

        // Iterate over the rows and populate the data buffer
        foreach (DataRow row in df.Rows)
        {
            // Create a new cell (essentially a record)
            var dbCell = new DataBufferCell();

            // Populate all the used dimensions and the data (if you have a lot of data to populate, it is better to use the column ordinal rather than the column name)
            dbCell.SetData((decimal)row[0], DataCellExistenceType.IsRealData, DataCellStorageType.DurableCalculation);
            dbCell.SetAccount(api, row[1].ToString());
            dbCell.SetIC(api, row[2].ToString());
            dbCell.SetOrigin(api, "AdjConsolidated"); // Here we write into AdjConsolidated, but you can also write to Import or Forms
            dbCell.SetUD1(api, row[3].ToString());
            dbCell.SetUD2(api, row[4].ToString());

            // Add the cell to the data buffer, aggregating if the cell is duplicated (should not happen because the data has already been aggregated in stage)
            db.SetCell(si, dbCell, true);
        }

        // You can use this to debug the content of the data buffer in the error log
        //db.LogDataBuffer(api, "ParentLoadTest", 1000);

        // Here we set the data unit destination, including the target view set to YTD.
        // Bear in mind that a force calculate on all the entities will run in parallel, so if you force calculate many entities the database will be hit quite hard.
        // Even if you use a separate application, there is only one database server per environment. Keep this in mind to avoid draining the database of resources during data import.
        var povCell = new DataCellPk(api.Pov.GetDataUnitPk(), DimConstants.YTD, db.CommonDataBufferCellPk);
        var destInfo = new ExpressionDestinationInfo(povCell, povCell, false);
        api.Data.SetDataBuffer(db, destInfo);
    }

    break;

3 - Assign this business rule to the relevant cubes.

[Screenshot: FredLucas_2-1718716073054.png]

4 - Trigger a Force Calculate. A simple way to do this is via a Data Management step:

[Screenshot: FredLucas_4-1718716343860.png]

 

Hello Fred,

Thank you for your reply. Your assumption is right 😊

I am trying to import consolidated data from an external table and would like to test adding this data to OneStream.

I haven't had time to test your solution yet. I will get back to you next week once I finish my tests.

Thank you again for your help.

Regards,

Henning
Valued Contributor II

Hi, am I right in assuming that this data is just a historical "data dump"? This is not part of the reconciled data for statutory consolidation, I assume?

The code that has been shared here allows you to copy data to parent entities, but if data is loaded to all entities, the data from the base entities will still translate and consolidate up to the parent entities. So, at the very least, the consolidation percentage should be set to 0 to prevent this.

If this is indeed statutory consolidated historical data that is also used to provide closing balances for the starting point in OneStream, so that the correct opening balances are pulled from there, I recommend not following this approach. The typical approach is to load the data to base entities and then let the system translate, calculate share and eliminate the data. Additional adjustments should be posted via journals. I know this takes more time, but it gives control to the accountants and ensures data integrity in the statutory actual scenario, as all data is based on the same consolidation logic. It also helps validate the solution, in the sense that once the historical data has been reconciled in OneStream, you know the system works as per your requirements.

Historical journals can be loaded directly into OneStream as well in case you have a lot of those and posting each one individually seems too time-consuming.

aabirached
New Contributor II

Hello Henning,

Thank you again for your help.

I confirm that this is just a historical static data dump. I am not aiming to use this data to provide closing balances for the starting point in OneStream.

I just need to add these consolidated amounts in OneStream for read only purposes.

Is it possible to load data directly to the Top consolidation member?

Thank you in advance.

Regards,

Henning
Valued Contributor II

No, C#Top does not store data. Data is stored (amongst other members) on C#Local.

You could also consider using technical / historical entities for 'dumping' this data. That way you could load all of your data to a technical base entity in OneStream which represents a parent in your legacy system, and leverage the built-in load functionality via the workflow. But all of that, of course, depends on your solution and requirements.

Henning
Valued Contributor II

Hi, I am with Fred on this one. Did Fred assume your requirement / use case correctly? If not, may I ask for some further details on what you would like to achieve? Maybe we can point you in the right direction; at the very least we should be able to put some more options on the table.

aabirached
New Contributor II

Yes Henning, that is the requirement I am looking for.

I will test the solution and get back to you.

Regards,
