OneStream - Piece by Piece
The core learning comes from first understanding the OneStream artifacts (the individual objects or components that make up the platform). A combination of these artifacts completes your application or solution (used, for example, to consolidate or plan). Alternatively, take advantage of OneStream's Solution Exchange portal, where numerous use cases with the artifacts already pieced together are readily available to download.

Like any jigsaw puzzle, the box cover shows the whole picture so you can see what the result will look like. Once opened, you can lay out all the pieces on the table and study each individual tile to understand where it fits in the bigger picture. Here is OneStream's front cover of some of the artifacts:

Here are the main pieces now laid out on the table:

Dimensions: Classed as metadata, these are sets of related members. Each member in a dimension is an item name that labels, so to speak, the data it represents. So, if our dimension is called Fruit, the members inside it could be named Oranges, Apples, Grapes, and Peaches, and the data for each item could point to unit sales. Dimensions are built per dimension type as follows:

- Entity Dimension: The organization's business areas used for statutory or management reporting.
- Scenario Dimension: A version of data that can reflect various Scenario Types such as Actual, Budget, or Forecast.
- Account Dimension: The structure representing the organization's chart of accounts, holding both financial and non-financial members.
- Flow Dimension: Set up to provide the movements and details on how account values change over time.
- User Defined (UD) Dimension: Hierarchies that can be used to analyze a report further, such as products, regions, or cost centers.
- Parent: Resides within the Entity dimension and provides the mechanism to further break down an entity's business area.
- Intercompany: Determines which entities within the Entity dimension trade in the group and are involved with intercompany activity.
- Time Dimension: Data can be stored and reported at weekly, monthly, quarterly, half-yearly, and yearly levels.
- Consolidation Dimension: Provides the analysis of rolled-up data from its local currency through translation, share, elimination, and adjustments to the final value in the parent entity's member.
- Origin Dimension: Identifies the data's origin as an import, form entry, or journal adjustment.
- View Dimension: Shows the data from different perspectives, for example year-to-date, month-to-date, or quarter-to-date.

The remaining pieces:

- Cube: A collection of relevant dimensions forming a multi-dimensional financial model that holds data for analysis and reporting.
- FX Rates: The currency codes used for currency exchange rates.
- Import: A mapping setup from the source file to the target cube for the purpose of loading data.
- Forms: A manual (or import-assisted, if required) way of entering data into sheets for the purpose of collating values, for example headcount.
- Journals: Adjustments to the loaded data, providing governance over when and by whom the adjustment was performed. As well as manually creating the journal, there is also an import feature that can create the journal from an Excel or comma-separated values file.
- Transformation Rules: The rules behind which source items map to which target items.
- Confirmation Rules: A developer-built data quality check that prevents continuation of the workflow until everything is acceptable, for example the balancing of a balance sheet.
- Certification Questions: A questionnaire used to sign off on data as acceptable.
- Cube Views: The main building blocks for reports and dashboards, used to display and/or enter cube data.
- Dashboards: Developers design dashboards to display data in a user-friendly manner and can set them as an end user's landing page, a Workspace in a workflow, or a series of guided reporting selections.
- Spreadsheets: A spreadsheet workbook directly connected to OneStream data that can be displayed and updated in real time.
- Report Books: A combination of different report types forming a report pack that can be distributed to stakeholders.
- Extensible Documents: A blend of OneStream content with Microsoft content that references OneStream data.
- Workflow: A guided approach for users to complete specific assigned tasks at specific times.
- Security: A way to permit users to access only the objects relevant to their tasks in OneStream.

The OneStream fundamentals book expands on all the above pieces and is the concise starter guide for anyone new to corporate performance management and specifically the OneStream world. It will take you through OneStream's journey, a road map to understanding the platform's metadata, data import, calculations, workflow, reporting topics, and much more. Enjoy!

Excel Extract from SQL Table Editor
Hi Team, Is there any way to extract the SQL Table Editor contents to an Excel file? Currently I have a SQL Table Editor with multiple pages, and the default extract method (right-clicking on the columns/rows) extracts just one page. Is there a way to extract the full table into an Excel file? Thanks.

The Dynamic Grid Dashboard Component
Are you frustrated by volume limitations and the inability to edit data in grid view? Do you find the SQL Table Editor restrictive because it only supports a single table? Are your items taking too long to load in these grids? Discover a smarter way to load data on dashboards and grids: the Dynamic Grid dashboard component.

Traditionally, OneStream grids have been limited in customization, flexibility, data retrieval, and data manipulation. The Dynamic Grid changes that traditional pattern, handling each of those areas in a more robust and enhanced way. The Dynamic Grid also supports multiple data sources from a single component (though that is out of scope for this article). The Dynamic Grid loads only the data you need, nothing more, and delivers unmatched flexibility: you can customize your grid to fit your company's exact requirements. Enjoy traditional column formatting from the SQL Table Editor, now enhanced with powerful row and cell-specific formatting options. I could go on about how powerful this new dashboard component is, but you are here to learn how to set it up. Let us dive into an example of creating a Dynamic Grid.

The Dynamic Grid relies on two out-of-the-box functions: GetDynamicGridData and SaveDynamicGridData. GetDynamicGridData retrieves the data for display in the grid. SaveDynamicGridData persists changes made in the grid, whether you are adding, updating, or deleting rows. Together, these functions power efficient data retrieval and safe, transactional saves for the Dynamic Grid.

GetDynamicGridData Example Code:

The first line, If args.Component.Name.XFEqualsIgnoreCase("Grid7") Then, acts as a conditional check to segment multiple Dynamic Grids within the same assembly. In other words, it ensures that the logic applies only to the grid named Grid7. This name must exactly match the Dynamic Grid component name defined in your dashboard configuration.
Immediately after the opening If condition, you will notice two key arguments: startRowIndex and pageSize. These variables define where the table should begin rendering and how many rows appear per page. Without them, the grid would attempt to render the entire dataset, which would create serious performance issues. Here is what you need to know about pageSize:

- pageSize is driven by the Rows Per Page property on the grid.
- By default, Rows Per Page is set to -1, which means the grid uses your security settings to determine row limits.
- If you specify any value other than -1, that value overrides the security setting.
- The maximum allowed page size is 3,000 rows per page.

Setting these arguments correctly is critical for performance and user experience.

The next few lines use standard OneStream and .NET objects to open a database connection and create a command, nothing unusual there. The real magic happens in the SQL query, which leverages the lesser-known OFFSET and FETCH clauses for efficient paging. These clauses allow you to return only the rows you need, rather than loading the entire dataset. Alternatively, you can implement paging by generating a RowNumber column and filtering with a WHERE clause, such as:

WHERE ((RowNumber >= {StartRowNumber}) AND (RowNumber <= {EndRowNumber}))

Both approaches achieve the same goal: controlled data retrieval for better performance.

Note: When building a Dynamic Grid with SQL, always follow best practices for writing SQL queries. This ensures your solution is secure, efficient, and maintainable.

Notice that the StartRowNumber and EndRowNumber variables are integers. They drive the SQL parameters inside the OFFSET and FETCH section of the SQL query. Using parameters is essential for pagination because it ensures your query retrieves only the requested rows while preventing SQL injection.
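As a concrete illustration of the two paging approaches just described, here is a hedged sketch of the underlying SQL. Only the OFFSET/FETCH clauses and the RowNumber filter come from the article; the table and ordering column (dbo.sample_data, Id) are illustrative placeholders, and the @-parameters stand in for the bound StartRowIndex/PageSize and StartRowNumber/EndRowNumber values.

```sql
-- Approach 1: OFFSET / FETCH paging.
-- @StartRowIndex and @PageSize are bound as parameters from the grid arguments.
SELECT *
FROM dbo.sample_data
ORDER BY Id                         -- OFFSET/FETCH requires an ORDER BY
OFFSET @StartRowIndex ROWS
FETCH NEXT @PageSize ROWS ONLY;

-- Approach 2: generate a RowNumber column and filter it with a WHERE clause.
SELECT *
FROM (
    SELECT d.*, ROW_NUMBER() OVER (ORDER BY d.Id) AS RowNumber
    FROM dbo.sample_data AS d
) AS paged
WHERE ((RowNumber >= @StartRowNumber) AND (RowNumber <= @EndRowNumber));
```

Both queries return the same page when StartRowNumber = StartRowIndex + 1 and EndRowNumber = StartRowIndex + PageSize.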
The next section of code handles data retrieval and paging metadata:

- First, a new DataTable is created and populated using the SQL query for the current page of records.
- Then, Dim SQL As String defines a COUNT query to determine the total number of rows in the underlying table.
- A second DataTable is used to execute this count query and retrieve the result.
- Finally, the line result.TotalNumRowsInOriginalDataTable = $"{dt(0)(0)}" assigns the total row count to the XFDataTable result object. This property is critical for paging; without it, the grid cannot calculate how many pages to display. In short: no total count, no paging.

Once all the arguments are in place, the final step is to create the XFDynamicGridGetDataResult object. This constructor requires three key inputs:

- XFDataTable: the data table you populated earlier (e.g., the result variable).
- Column definitions: we will cover these in detail later in the article.
- DataAccessLevel: determines how users interact with the grid data. You can choose from .AllAccess (full read/write access, most common for editable grids), .ReadOnly (users can view but not modify data), and .NoAccess (no data interaction allowed). For most scenarios involving data manipulation, .AllAccess is the recommended setting.

Note: Built-in paging controls and the ability to sort columns in ascending or descending order are on the roadmap; as of the writing of this article, that functionality is not yet available.

SaveDynamicGridData Example Code:

In the previous section, we focused on rendering data. Now, let us shift to saving modified data, whether inserting, updating, or deleting rows. With the SQL Table Editor, save functionality is built in. However, the Dynamic Grid offers far greater flexibility, which means you will need to implement custom saving logic.
This approach gives you full control over how data changes are processed, custom validation rules, and user messaging and error handling. Custom saving functionality ensures your grid behaves exactly as your business requires.

Let us break down the code for the SaveDynamicGridData function step by step. In the save routine, you will see two argument objects sourced from args:

- Dim getDataArgs As DashboardDynamicGridGetDataArgs = args.GetDataArgs retrieves context from the earlier GetDynamicGridData call (e.g., paging, filters, sort), which is useful if your save logic needs to reference the current view.
- Dim saveDataArgs As DashboardDynamicGridGetDataArgs = args.SaveDataArgs initializes the save payload: inserted, updated, and deleted rows, plus column metadata.

Next, the code gathers what is editable:

- Dim editedDataRows As List(Of XFDataRow) = saveDataArgs.EditedDataRows gets the list of rows that have been modified, inserted, or deleted. This is often the entire row set, but you can scope it to specific rows when only part of the data should be editable.
- Dim columns As List(Of XFDataColumn) = saveDataArgs.Columns provides the columns list (names, types, formats) for the target table, which is critical for validation and parameter binding.

You will also see a simple table name string (e.g., Dim tableName As String = "dbo.sample_data") to identify where the changes are saved, and a Boolean flag used later to control behavior (such as enabling/disabling validation or transactional commits). Finally, the code re-opens a database connection to perform the actual writes (insert, update, delete). This is done in the save path to keep read and write operations logically separated and to ensure that any transactional logic is scoped to the save operation.

Now we are moving into the fun section of code, which opens with a For Each loop over all our possible edited data.
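The dispatch that the next section walks through can be sketched as follows. The For Each loop, the Select Case on editedDataRow.InsertUpdateOrDelete, and the three helper names come from the article; the enum member names and the helper signatures are assumptions and may differ in your OneStream version.

```vb
' Route every edited row to the matching write operation.
' The XFInsertUpdateOrDelete member names below are assumptions.
For Each editedDataRow As XFDataRow In editedDataRows
    Select Case editedDataRow.InsertUpdateOrDelete
        Case XFInsertUpdateOrDelete.Insert
            ' New row added in the grid: parameterized SQL INSERT
            InsertDataRow(si, tableName, columns, editedDataRow)
        Case XFInsertUpdateOrDelete.Update
            ' Existing row changed: parameterized SQL UPDATE
            UpdateDataRow(si, tableName, columns, editedDataRow)
        Case XFInsertUpdateOrDelete.Delete
            ' Row removed in the grid: parameterized SQL DELETE
            DeleteDataRow(si, tableName, columns, editedDataRow)
    End Select
Next
```

Keeping each operation in its own helper makes the routine easier to test and keeps the parameter binding for each SQL statement in one place.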
Now we get to the save loop. The routine starts with a For Each loop over the collection of edited rows. Inside the loop, a Select Case editedDataRow.InsertUpdateOrDelete directs the logic for each row, applying the appropriate insert, update, or delete operation. There is no shortcut here: each operation requires its own parameterized SQL command and validation. As you can see from my previous screenshot, I have split the logic into three dedicated functions to keep the code clean and testable: InsertDataRow(...), UpdateDataRow(...), and DeleteDataRow(...). I will include these helpers in the appendix to this blog for reference.

For InsertDataRow, we use a standard SQL INSERT statement with parameters (not string concatenation), ensuring safety and better performance:

INSERT INTO <tableName> (Column1, Column2, Column3) VALUES (Value1, Value2, Value3)

The UpdateDataRow function uses a standard SQL UPDATE statement to modify existing records. Here is the basic pattern:

UPDATE <tableName> SET column1 = value1, column2 = value2 WHERE <condition>

The DeleteDataRow function uses a standard SQL DELETE statement to remove records. Here is the basic pattern:

DELETE FROM <tableName> WHERE <condition>

Now that we have done all this coding magic, what's next?
First, create an empty XFDynamicGridSaveDataResult:

Dim result As New XFDynamicGridSaveDataResult()

This step is non-negotiable: you must send data back to the grid after the save. Without it, the grid will lose context and paging will break. That is why this line is essential:

result.DataTable = GetDynamicGridData(si, brGlobals, workspace, args)?.DataTable

By calling GetDynamicGridData again, you ensure the save result reflects the latest filters, sort order, and paging logic, keeping the user experience consistent after modifications.

With the empty XFDynamicGridSaveDataResult created and the original data table rehydrated, the next step is to restore paging so the user returns to the correct page after a save. Compute the current page index using VB.NET integer division:

result.PageIndex = (getDataArgs.StartRowIndex \ getDataArgs.PageSize)

Finally, before returning the result, there is one last piece of logic, a small but important snippet that ties everything together. This step ensures the save result object is fully populated with the data, paging info, and any additional metadata the grid needs to render correctly after modifications:

result.SaveDataTaskResult = New XFDynamicGridSaveDataTaskResult() With { .IsOK = True, .ShowMessageBox = True, .Message = "Save Finished" }

This is where we assign values to the XFDynamicGridSaveDataTaskResult. You will notice the With block, an idiomatic VB.NET construct that lets you set multiple properties on the same object without repeating the variable name.
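Put together in one place, the tail of the save routine described above reads roughly as follows (assembled from the snippets in this article; variable names match those introduced earlier):

```vb
' Build the object the grid needs to re-render after a save.
Dim result As New XFDynamicGridSaveDataResult()

' Rehydrate the data table so the grid keeps its filters, sort, and paging.
result.DataTable = GetDynamicGridData(si, brGlobals, workspace, args)?.DataTable

' Integer division (\) recovers the page the user was viewing.
result.PageIndex = (getDataArgs.StartRowIndex \ getDataArgs.PageSize)

' Confirmation details shown to the user once the save completes.
result.SaveDataTaskResult = New XFDynamicGridSaveDataTaskResult() With {
    .IsOK = True,
    .ShowMessageBox = True,
    .Message = "Save Finished"
}

Return result
```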
In other words, instead of writing result.IsOK = True, result.ShowMessageBox = True, and so forth line by line, you can group them neatly inside one block. Now that we have all this code in place, you can finally return the result.

We have covered how to render data and how to save modified rows, but the Dynamic Grid's most used reporting feature is its ability to apply conditional column formatting. This functionality mirrors the familiar options found in the SQL Table Editor, giving you full control over how columns look and behave based on dynamic conditions. Below is a screenshot of the SQL Table Editor column formatting properties.

All these formatting properties are available in the Dynamic Grid, but only through code. You can apply them directly within the GetDynamicGridData function or encapsulate them in a dedicated formatting function for better organization and reuse. In my implementation, I chose the latter approach, creating a separate function to handle conditional column formatting. Inside the GetDynamicGridData function (see screenshot above), I added the following line:

Dim columnDefinitions As List(Of XFDynamicGridColumnDefinition) = Me.GetColumnDefinitions()

This creates a list of XFDynamicGridColumnDefinition objects, which define the properties and formatting rules for each column in the grid. The call to Me.GetColumnDefinitions() retrieves these definitions from a dedicated function, keeping the logic clean and reusable.

As you can see, this entire line of code is commented out in the screenshot; I wanted to highlight all the possible properties. If you look at the available properties, they match one-for-one with the properties that exist inside the SQL Table Editor component. You must initialize the variable (named columnDefinition6 in my example), and, as we previously discussed, the With statement is used here. Each of these properties should function exactly as they do within the SQL Table Editor.
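A minimal sketch of what such a GetColumnDefinitions helper could look like. The type names and the With pattern come from the article; the individual formatting property names (.ColumnName, .IsReadOnly, .FormatString) are illustrative assumptions, since the real property set mirrors the SQL Table Editor column formatting options shown in the screenshot.

```vb
Private Function GetColumnDefinitions() As List(Of XFDynamicGridColumnDefinition)
    Dim columnDefinitions As New List(Of XFDynamicGridColumnDefinition)

    ' One definition per grid column. Property names here are assumptions
    ' that mirror the SQL Table Editor column formatting properties.
    Dim columnDefinition1 As New XFDynamicGridColumnDefinition() With {
        .ColumnName = "Amount",
        .IsReadOnly = False,
        .FormatString = "#,##0.00"
    }
    columnDefinitions.Add(columnDefinition1)

    Return columnDefinitions
End Function
```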
Once you have set the properties, you need to add them to the collection of column definitions. This is what you see with columnDefinitions.Add(columnDefinition#), which adds each specific column definition to the list of all column definitions. Finally, as with every function, the Return clause simply returns the final output.

In conclusion, we recognize that new code-based components can feel intimidating at first. But with the insights from this blog, we hope we have lowered the barrier to entry and given you the confidence to start taking full advantage of this powerful new grid.

InsertDataRow Sub Code Sample:

UpdateDataRow Sub Code Sample:

DeleteDataRow Sub Code Sample:

Actuals Adjustment
How to Add Adjustments with Add Back Category and Free-Form Add Back Name to Actuals in OneStream?

Hi Everyone, I'm working with Actuals data structured like the attached screenshot and want to add adjustments linked to this data.

Key points:
- Adjustments must tie to Actuals by Company, Account, and Month.
- Each adjustment includes an Add Back Category and an Add Back Name, which is free-form text and can vary monthly.
- I want to maintain monthly detail and enable reporting and drill-down by both Add Back Category and Add Back Name.

My questions:
- What's the best approach to model and load adjustments with both an Add Back Category and a free-form Add Back Name in OneStream?
- Should Add Back Name be a UD member with auto-create, a text attribute, or something else?
- How do I avoid member explosion or reporting complexity given the free-form nature of Add Back Name?
- How can I ensure adjustments roll up correctly and maintain data integrity?
- Any recommendations for validation, governance, and reporting for this kind of adjustment detail?

Thanks in advance for your insights!

Pass-thru Formatting on Cube Views
Good afternoon, I'm trying to pass through formatting from one cube view to another. The second cube view is a dialog pop-up in a dashboard triggered by a selection-changed event. So far I've been able to determine destination formatting based on the Cell POV dimension members of the cell "clicked", but I'm hitting a snag on dynamically calculated cells (a column showing Actual versus Budget %, for example). The Cell POV dimension members are the same, with the only difference being a "GetDataCell" at the end. I haven't figured out a way to retrieve what, if anything, is in the "calculation script" for a cell intersection. Or is there a way to retrieve the row and column names for a cell intersection at the time of a click? Looking for any guidance possible. Thanks, Brandon

Stop confusion with empty parameters in Dashboards
Tired of opening your dashboard to find no pre-populated values? Frustrated when your combo boxes and list boxes don't display a value at runtime? This brief blog post will assist admins/super users in populating those values to avoid confusion for end users when they open a dashboard devoid of any values.

Executing QuickViews using BRs
Hi everyone, Does anyone know if there's a specific method to run QuickViews using a Business Rule? I'm trying to understand if OneStream is capable of doing something like that. Presumably, what I would like is to execute a dashboard that would allow me to see one; I'm just seeking some ideas on this topic. Thank you!

Task Manager Dashboard Configuration
I have to create a Task Manager build that does not include weekends in the duration. We have tried adding the weekends to holiday profiles and excluding them from the explicit day map, but the duration always includes the weekends. Right now, I have due dates landing on weekends and holidays even when calling them out in the Holiday Profile and setting each one with an offset of 1. Is there a workaround for this, or any other solution to have the durations skip weekends and keep due dates from falling on weekends or holidays?

Embedded dynamic repeater
I have a cube view with a parameter on Entity (|!Entity_select!|) that I would like to put in a tabbed dashboard, with one tab per specific Entity member. I am using the dashboard type "Embedded Dynamic Repeater", where the component is a data explorer report. The data explorer report has a data adapter component pointing to the cube view. I thought the way to do this would be to use the template parameter values in the collections, but it shows the error attached below. It seems that the template parameter value was not passed to the Entity_select parameter. Is the data explorer report component compatible with the Embedded Dynamic Repeater dashboard type? Is there any sample of an Embedded Dynamic Repeater dashboard using a data explorer report component? TIA, Nuryana

Publish a Document in Narrative Reporting
Hi Team, We have an issue publishing a document in Narrative Reporting. When publishing the document, it is saved under the respective user's folder. How can we publish a document to a shared folder or a group folder? Can you please help us resolve the issue? Thank you, Shiva Prasad