SQL Database IDs for Origin and Consolidation dimension

Marcello
Contributor

Hello, 

I am trying to "translate" the system element IDs of the Origin and Consolidation dimensions.

I found that they are stored in the SQL database with the following IDs.

 

SELECT distinct [ConsId]
FROM [ONESTREAM_FSK_TEST].[dbo].[vDataRecordAll]
returns this list:
176
-13
62
39
-12
57
37

SELECT distinct [OriginId]
FROM [ONESTREAM_FSK_TEST].[dbo].[vDataRecordAll]
returns this list:

-999
-33
-30
-32
-12

Can you help me reconcile these IDs with the dimension elements?

Thanks in advance

Marcello


JackLacava
Community Manager

IIRC they are bitmaps.

To be honest, I hate to be "that guy", but there are probably better ways to go about looking at your data. If you're trying to export raw data, look into the FDX "Execute" calls, or the functions in BRApi.Finance.Data; they are more than enough for data extraction, and can run as part of an Extender or a Dashboard DataSet rule that powers a Data Adapter of type Method. If you're building reports, look at Cube View MD Data Adapters.
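That said, if you just want to sanity-check the raw IDs you already pulled, they generally correspond to member IDs in the application database's Member table. The table and column names below are assumptions, so verify them against your own schema before relying on this; negative IDs typically belong to built-in system members.

```sql
-- Hedged sketch: map the ConsId values back to member names.
-- Assumes a Member table with MemberId and Name columns
-- (an assumption -- check your application database schema).
SELECT DISTINCT d.[ConsId], m.[Name]
FROM [ONESTREAM_FSK_TEST].[dbo].[vDataRecordAll] d
LEFT JOIN [ONESTREAM_FSK_TEST].[dbo].[Member] m
    ON m.[MemberId] = d.[ConsId];
```

Any ConsId that comes back with a NULL name would point at a system member or a different lookup table, which is exactly why the supported APIs are the safer route.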

Marcello
Contributor

Hi Giacomo,

thanks for the help.

For main cube reporting I think we will use "MD Data Adapters".

Our original goal is to enhance standard OneStream reporting on journals, for which we did not find any solution other than querying the DB tables directly. On this topic, however, we are struggling with IDs and security issues. Do you know if there is a specific way to access journal entry details using OneStream's standard query tools?

Thanks,

Marcello

 

TonyToniTone
Contributor II

I agree with Jack.  The Data Record tables are the consolidation tables and are extremely active.  They are part of the DataRecordAll view, which combines data records from more than 100 Data Record tables into a single view.  You should never query directly against these tables or the view.  There are several alternatives you could look into, depending on your needs:

1.  Develop a Method Query Data Adapter using the JournalForWorkflowUnit Method Type.  This can be tied into Workflows to accommodate security requirements as well.  


2.  Build a Data Management Export Sequence with a Data Mgmt Export Data Step to store journal data in Stage.  Then you can query the data safely from the Staging tables.  

3.  Use one of the FDX functions to export data from the Cube and store in BI Blend table.  Then you can query the data safely from a BI Blend table.

4.  Use one of the FDX functions to export data from the Cube and store in a custom table other than BI Blend table or Staging tables.  

5.  Build a Data Mgmt Export Data Step and export journal data into a CSV file.

6.  Create Cube View or Cube View MD Data Adapters with journal detail

7.  Leverage the JournalHeader, JournalLineItem, and Member tables to join into a Data Adapter
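Option 7 could be sketched roughly as below. The JournalHeader, JournalLineItem, and Member table names come from the list above, but the column names and join keys used here are assumptions for illustration only; check them against your application database before building a Data Adapter on top of them.

```sql
-- Hedged sketch of the JournalHeader / JournalLineItem / Member join.
-- Column names and keys are assumed, not confirmed -- verify first.
SELECT jh.[Name]  AS JournalName,
       jli.[AccountId],
       m.[Name]   AS AccountName,
       jli.[Amount]
FROM [JournalHeader] jh
JOIN [JournalLineItem] jli
    ON jli.[JournalHeaderId] = jh.[JournalHeaderId]  -- assumed key
LEFT JOIN [Member] m
    ON m.[MemberId] = jli.[AccountId];               -- assumed lookup
```

A query shaped like this could feed a SQL-based Data Adapter, which keeps the heavy Data Record tables out of the picture entirely.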

I would suggest putting a proper data preparation strategy in place for journals.  If your journal reporting process requires something more enhanced than the standard journal reporting, then find a place to organize these journal records in a way that meets your reporting requirements (whether in new tables or dynamically in Data Adapters) and optimizes query performance.