Data Processing and Performance - A comprehensive guide to tables and design
Overview

To maintain a well-performing application, one must understand how the underlying database works and, more importantly, its limitations. Understanding how a system works allows designers and administrators to create reliable, stable, and optimally performing applications. This white paper is intended to guide the design of optimal data processing strategies for the OneStream platform.

First, this document will provide a detailed look at the data structures used by the stage engine as well as those used by the in-memory financial analytic engine, providing a deep understanding of how the OneStream stage engine functions in relation to the in-memory financial analytic engine. The relationship between stage engine data structures and finance engine data structures will be discussed in detail. Understanding how data is stored and manipulated by these engines will help consultants build OneStream applications that are optimized for high-volume data processing.

Second, the workflow engine configuration will be examined in detail throughout the document, since it acts as the controller/orchestrator of most tasks in the system. The workflow engine is the primary tool used to configure data processing sequences and performance characteristics in a OneStream application. There are many different workflow structures and settings that specifically relate to data processing, and these settings will be discussed in relation to the processing engine that they impact.

Finally, this document will define best practices and logical data processing limits. This will include suggestions on how to create workflow structures and settings for specific data processing workloads. With respect to defining data processing limits, this document will define practical/logical data processing limits in relation to hard/physical data processing limits and will provide a detailed explanation of the suggested logical limits. This is an important topic because in many situations the physical data processing limit will tolerate the amount of data being processed, but the same data may be processed much more efficiently by adhering to logical limits and building the appropriate workflow structures to partition data. These concepts are particularly important because, when properly implemented, they enable efficient storage, potential parallel processing, and high-performance reporting/consumption.

Conclusion

Large Data Units can create problems for loading, calculating, consolidating, and reporting data. This is ultimately a limitation of what the hardware and networks can support, and your design needs to take it into account. This paper provides some options to relieve some of the pressure points that could appear.

NOTE: Some tables mentioned in the paper have changed in version 9+. See this note for further details.

Unlocking the Power of Attributes in OneStream: Balancing Potential with Performance
OneStream offers users the ability to activate Attributes for key dimensions such as Entities, Scenarios, Accounts, and User-Defined dimensions. The process to enable Attribute Members is relatively straightforward and is found within the Settings -> Attribute Member section of the User-Defined dimensions. While Attributes hold the promise of expanding the Financial Model capabilities of OneStream, they also come with a caveat: the potential to impact system performance. In this blog post, we embark on a journey through the realm of Attributes in OneStream. We will delve into the opportunities they present for enriching your financial model and dive into the challenges that may arise, particularly concerning performance issues. By the end of this exploration, you'll have a comprehensive understanding of when and how to implement Attributes effectively, ensuring that your OneStream application strikes the right balance between functionality and performance.

The Magic and Math of C#Top
I hear that you’ve bought the OneStream Administrator Handbook – well, well, congratulations on taking the steps that will (hopefully) make your life as an Administrator easier! The book is brimming with great examples and use cases on topics that the Administrator will likely encounter, but here’s something a little extra – a more detailed breakdown of how C#Top works from the base entities up to their parents! This comes in handy when an end user new to OneStream comes to you and says something along the lines of, “Hey, the sum of the base entities doesn’t equal the parent entity, what gives?”
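As a quick illustration of why those numbers rarely tie (a hypothetical example of mine, not one from the book): suppose parent entity P reports in USD and has two base entities, A (USD) and B (EUR), each showing 100 in its local currency. If the EUR/USD rate is 1.10 and B is only 80% owned, the consolidated value at P works out to 100 + (100 x 1.10 x 0.80) = 188 USD, before any intercompany eliminations booked at the parent. Simply adding up the base entities’ values (100 + 100 = 200) mixes currencies and ignores ownership percentages and eliminations, which is exactly the gap that puzzles new users at C#Top.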
Can a member have two or more different Account Types?

Have you run into the situation where a customer has a KPI or statistical account that they want OneStream to dynamically calculate for one Scenario Type and users to enter for another? If you’ve worked with OneStream’s dimension library, you know that varying Account Type by Scenario Type isn’t possible. There is, however, a workaround if the account needs to be reported for both Scenario Types on the same line.

To set up our dilemma, we create an account called DynamicActInputBud. We set the Account Type and Formula Type as Dynamic Calc and Allow Input as True. The formula on this account, Return api.Data.GetDataCell("A#15000:O#Top/A#22000:O#Top"), is entered only on the Actual scenario, and the Budget scenario formula is left blank. In the cube view, the formula result comes through for Actual, but input for Budget is not permitted. This is because a Dynamic Calc Account Type doesn’t store data to the database. So we go back, change the Account Type to Non Financial (leaving the Formula Type as Dynamic Calc), and run the cube view again. This time Budget allows input but the formula result doesn’t come through for Actual.

If the customer is adamant about having a single line that functions both as a dynamic calc and allows input, you’re out of luck (as illustrated above). On the other hand, if it’s simply a reporting line and the dynamic calculation/input are completed at different points in the process, then there is a simple workaround. Basically, the solution involves setting up independent accounts for each use (Actual as Dynamic Calc and Budget as Non Financial) as well as a “pointer” account that retrieves the value sitting in each of the respective accounts. The pointer account is set up as a Dynamic Calc with Allow Input set to False. The dynamic account (for Actual) is set up as a Dynamic Calc with Allow Input set to False. The input account (for Budget) is set up as a stored Account Type with Allow Input set to True. Then you reference the pointer account on the cube view and it returns the respective value of each independent account based on the scenario.

1. Set up the dynamic account (for Actual): DynamicAct. Enter the Formula you want calculated on this account.
2. Set up the input account (for Budget): InputBud. Select whatever account type is required for your application.
3. Set up the pointer account: DynamicActInputBud. On the Actual Scenario Type: Return api.Data.GetDataCell("A#DynamicAct"). On the Budget Scenario Type: Return api.Data.GetDataCell("A#InputBud"). NOTE: Be sure to write the Formula for Calculation Drill Down as well!
4. On your cube view, call A#DynamicActInputBud for the account rows, with the Actual and Budget scenarios on the columns.
5. Run the cube view: the Actual column returns the calculated result while the Budget column accepts input.
6. You can go one step further: to prevent entering data into the Actual Scenario on the InputBud account, add it to a Conditional NoInput rule (not shown in this blog post).

This is just one example of using some basic creativity to solve for a customer’s reporting requirement.
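If you would rather maintain one formula instead of two separate Scenario Type entries, a branching version of the pointer formula is possible. Here is a minimal sketch (VB.NET, written by me rather than taken from the post; it assumes the formula is attached where it evaluates for both scenarios and that api.Pov.Scenario.Name returns the current scenario's name - verify both assumptions in your environment):

' Pointer account DynamicActInputBud: route to the correct source account
' based on the scenario currently being evaluated.
If api.Pov.Scenario.Name.Equals("Actual", StringComparison.InvariantCultureIgnoreCase) Then
	' Actual: return the dynamically calculated account.
	Return api.Data.GetDataCell("A#DynamicAct")
Else
	' Budget (and any other scenario): return the stored input account.
	Return api.Data.GetDataCell("A#InputBud")
End If

Either way, remember the note above about also writing the Formula for Calculation Drill Down.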
V9.1.1 Grid view error when updating IsIC setting

In Grid view on the Account Dimension, when attempting to update the IsIC setting for an account, I receive an Internal Server Error. I'm able to update the Is IC Account on the Member Properties view; however, in Grid view it returns the error below:
How's your Week?

When you are designing an application, do you wonder whether or not you should include weeks in your time dimension? This blog will look at the options available when you have a requirement for storing/reporting weekly data and answer some of the questions you should be asking yourself when deciding how to meet that requirement....
Extensibility Series: Riding the Wave of Vertical Extensibility

What is Vertical Extensibility?

Vertical (or entity/cube) extensibility is the transformative feature of OneStream’s Extensible Dimensionality that allows for the collapsing and extension of dimensional detail as you move up and down the entity hierarchy. This feature offers the ability to include and/or exclude detail at different levels in the entity structure, and its proper use has three major benefits:

1. It allows a single, unified data model to provide the flexibility required in management reporting while maintaining a single source of truth.
2. It allows for the management of data unit sizing to provide a highly performant end user experience.
3. It provides flexibility and future-proofing by setting the foundation for growth in your business and the OneStream platform.

Instead of needing separate, distinct data sets to meet the various reporting needs of your business, vertical extensibility provides the ability to maintain a single linked data model. The cube structure and data model in OneStream is flexible enough to include the details where they are needed and exclude them where they are not (while still providing visibility through drill down). The main difference between horizontal and vertical extensibility is that horizontal extensibility is within the same cube across different Scenario Types, while vertical extensibility is across cubes in the same Scenario Type.

A common example of vertical extensibility showcasing all three of the benefits mentioned above is that of a Summary cube (sometimes referred to as a “super-cube”) containing a total company entity and various Detail cubes (or “sub-cubes”) for each of the various business units in that company. In the CPM Blueprint application, you can see this type of configuration within the FinRptg linked cube structure as diagrammed below. In this example, the cubes and data models are linked through a common entity hierarchy.

When we look at the entity dimension above, it’s not immediately apparent that anything is going on there. However, when we compare the cube dimension assignments for Actuals, you can start to see differences in the level of detail. In the FinRptg cube, a summary level of Accounts, UD1, and UD2 has been assigned. This allows us to collapse the data unit down to the more summary level used in total company reporting. In the account dimension, for example, the lowest base members in the FinRptg cube would be at the NetRev, COGS, and OpEx level. When data consolidates from an entity in a sub-cube to a parent entity in the FinRptg cube, the records will be collapsed and stored in the higher dimensionality members. The lower-level base entities (ex. BU_100) and BU parent entities (ex. TotBU1) that exist in the FinBU1 and FinBU2 cubes will still have the more detailed granularity, but the TotCORP entity will only be stored in the database at that higher level. Without vertical extensibility, every distinct base intersection from both detailed cubes would also be stored for TotCORP, as shown below.

Applying vertical extensibility to our design allows us to collapse the data unit as it moves into the summary cube. This allows us to better manage the data unit size and keep the details where they are reported on. Applying this concept to the database records we looked at in the previous example, you can see a considerable reduction in the TotCORP entity data unit size and the amount of data we are storing in the database:

- Base revenue accounts collapsed into NetRev.
- Base geographies collapsed into a summary level (NA / EMEA).
- Base products collapsed into a summary product line (Product Line 1 / Product Line 2).

The benefit of this is a direct reduction of the TotCORP entity’s data unit size, which leads to snappier reporting and improved consolidation times. With the linked cube structure, we can directly drill down from TotCORP at the higher level into either TotBU1 or TotBU2, which sit in the detail cubes, whenever we need to explore additional detail.

Looking back at the CPM Blueprint application’s cube dimension assignment, we can also see a different application of vertical extensibility in UD3. FinBU1 does not collapse detail into FinRptg (as they both have the ProductSummary dimension assigned) but FinBU2 does. This shows the ability to vary base dimensions in each of the detail cubes. Perhaps different business units across the company use distinct cost centers or separate charts of accounts, or, in this case, BU 1 may have a more detailed breakout of products than BU 2. This use of vertical extensibility allows the cubes to be more targeted to the data, which improves performance and the end user experience by reducing sparseness. With this configuration, we can exclude members that aren’t applicable to BU 2.

When designing an application, it’s important to consider future flexibility and expansion both within the business and the OneStream platform. Utilizing vertical extensibility to configure a linked cube design from the start leaves the door open by providing numerous possibilities in the future. By extending the entity dimension down from a summary cube to at least one detailed cube, you are then able to:

- Expand and collapse detail in separate scenario types as they consolidate up.
- Utilize vertical extensibility on any new dimensions activated in the future.
- Link additional detail cubes to the summary cube.

Once Workflows have been configured and data has been loaded to a cube, you cannot add a summary cube above it. The foundation has been set, and your options around flexibility and managing the data unit size have been truncated.

How Should Vertical Extensibility be Applied?

Vertical extensibility is the natural next step in the extensibility design process. After aligning on the levels of input each dimension should have, you should also discuss how the detail varies across business segments and what levels of detail are required for reporting. In design I find myself asking some of the following questions:

- Do all entities follow the same chart of accounts?
- Do all entities follow a similar breakout of detail (geography, products, cost center, etc.)?
- Are there common characteristics of entity groupings?
- How does corporate reporting differ from management and base entity reporting?
- At a consolidated level, do you need to see the lowest level of detail? The most granular GL accounts? Each individual product? Every project?
- Could a restructuring cause entities to move between BUs or other groupings?

Planning & Preparation

In terms of order, I like to start with horizontal extensibility first. Horizontal extensibility is more of a mapping of the business process into OneStream, while vertical extensibility is more of a technical lever to drive performance, maintenance, and flexibility. Because of this, I find horizontal extensibility to be easier to understand, and I like to let the client walk before we run.
I suggest having the business define input extensibility across scenario types first and then work into the reporting levels and variances across business units or other groupings. I find this exercise to be a back-and-forth refining process:

- Look at existing reports, analyze what needs to be produced out of OneStream, and make suggestions on the details in each cube.
- Solicit feedback and refine the suggestions.
- Bump the suggestions up against the calculations that need to run, ad-hoc reporting, etc., and refine further.
- Finally, map out the combined vertical & horizontal extensibility in a matrix that the business can visualize and agree upon.

Throughout this design process, it is helpful to mock up what the data will look like in OneStream and what will/will not be valid with this data model. Remember that alternate entity hierarchies will also need to be extended from the summary into the detail cube(s). All entity structures (ex. Total Legal, Total Management, Total Tax, etc.) need to be thoughtfully planned out to incorporate extensibility, with alternate entity parents placed in the proper cube. If possible and time permits, create a quick strawman and load some raw data into a OneStream application so the client can play around with it and visualize it in QuickViews and drill downs. This will help further their understanding and learning journey. To reiterate, this is the time to set a healthy foundation for the future.

Configuration

The configuration of vertical extensibility and the linked cube structure has three main pieces:

1. Configure the entity dimensions.
2. Create the cubes and assign the created entity dimensions.
3. Assign the cube references.

Configuring the entity dimensions

Continuing with the example in the CPM Blueprint application, we will need to configure three entity dimensions and link them by adding cross-dimensional relationships. First, we create the three entity dimensions and the desired members within each. Next, we need to add relationships to link the structure. Select the FinRptg entity dimension, click on the TotCORP entity, and follow these steps:

1. Select “Add Relationship for Selected Member”. TotCORP should be populated as the Parent Member.
2. Select the ellipsis to the right of Child Member.
3. Change the dimension from FinRptg to FinBU1.
4. Select the TotBU1 entity.
5. Select OK on the “Select Member” popup window.
6. Select OK on the “Add Relationship” popup window.
7. Repeat the process to add the TotBU2 entity relationship.

After assigning these relationships, you will see the full linked structure in the FinRptg entity dimension. Note: the entities in black text are members of the FinRptg dimension, while the entities in gray text are inherited members.

Create the cubes and assign the created entity dimensions

The next step is to create the cubes and assign the configured entity dimensions. Continuing with the CPM Blueprint application example, we will need to configure three cubes (FinBU1, FinBU2, and FinRptg). More information on proper cube dimension assignment can be found in the linked article.

Assign the cube references

The final step is configuring the cube dimensions and assigning the proper cube references to link them. Select the FinRptg cube and navigate to the “Cube References” tab. It will auto-populate based on the entity dimensions linked previously, but the cube references also need to be assigned to complete the cube linking.
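Once the relationships and cube references are in place, the linked structure in the FinRptg entity dimension looks roughly like this (an illustrative sketch using the CPM Blueprint member names from above; the base entities under each BU will vary by application):

TotCORP            (FinRptg member)
  TotBU1           (inherited from the FinBU1 dimension)
    BU_100         (inherited)
    ...
  TotBU2           (inherited from the FinBU2 dimension)
    ...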
After configuring extensibility and assigning the proper Cube References, the Cube Properties and Data Access settings may also need to be updated:

- Is Top Level Cube For Workflow should be set to “True” on the summary cube.
- Workflow suffixes should be set on the summary cube. This section will be grayed out on all linked detail cubes.
- Calculation settings typically align across the linked cube structure.
- Business Rules that need to run in all cubes (ex. Custom Translation) should be assigned in both the summary and the linked detail cubes.
- Data cell security settings may need to be maintained in all cubes depending on the requirements around them (ex. UD1#200 requires data cell access security and UD1#200 exists in all cubes; therefore, the data cell access “slice” security must be applied on all cubes in the linked structure).

Recommendations & Considerations

The Data Unit is a key concept to understand in OneStream. It is important to effectively manage its size to allow for optimal performance while accounting for future growth. Vertical extensibility directly affects the data unit size as you consolidate up the entity hierarchy by allowing you to vary the levels of detail assigned across business units and summary levels of the business.

The mindset going into an application design should not be whether to use vertical extensibility but rather how it should be applied. Given the benefits outlined in this article, there are very few cases where vertical extensibility should not be used. Even in cases where the same dimensions are assigned to both the summary and detailed cubes (therefore not collapsing the data unit at all), I would still utilize a linked cube structure. This foundational setup allows you to use a separate Scenario Type in the future if the data unit size or performance does become an issue as the business grows or new use cases are brought into OneStream.

When deciding how many detail cubes to include in an application design, it is important to weigh many different factors, including:

- Are there data unit size concerns with a single detailed cube? Having a consolidated member across various BUs within a detailed cube could lead to an unmanageable data unit depending on the use of UDs and data volume.
- Is there a business reason to split into separate detail cubes? Maybe they have different charts of accounts or other dimensional differences (ex. Services vs Manufacturing).
- How often do entities move between business units? Frequent movements would be complicated between cubes, so you may want to contain these in a single detail cube. However, if it’s once every three years, or if you have to stretch to remember the last time it happened, the benefits may outweigh any effort to (potentially) move an entity between cubes later.
- What is shared across business units or other management structures vs what is distinct? If most items are shared, does it make sense to split across cubes? Consider any potential business changes as well, if they may push these business units to be more aligned in the future.
- How do internal controls differ? How should we be thinking about cubes in relation to the security model?

Applying extensibility is a balancing act. Too many dimensions and cubes can add maintenance overhead and detract from the end user experience. Too few can slow performance and also detract from the end user experience. There are typically compromises across the business during design.
For example, it may make more sense to add a few accounts to a common chart so it can be shared across the business, or to add some product lines to a business unit’s dimensionality even if they don’t use them. Instead of splitting these entities out into their own cubes, the complexities may outweigh the benefits. In design, it is important to outline the assumptions and reasoning behind a particular decision. List the options that were considered and the reasons they did not prove to be the best fit. With so many design levers at our disposal, it is helpful to have callbacks to the conversations that were had leading up to the final documented design.

In a linked cube structure, reports in OneStream can be created using the parent cube. The concept of Merged Hierarchies allows OneStream to understand the extended entity’s data model from within the parent cube. Use “MergeMembersFromReferencedCubes” to control the extensibility level in reports. Example:

A#IncomeStatement.TreeDescendantsInclusive.Options(Cube = |WFCube|, ScenarioType = XFMemberProperty(DimType=Scenario, Member=|WFScenario|, Property=ScenarioType), MergeMembersFromReferencedCubes = False)

Finally, as detailed in the horizontal extensibility article, it is important to properly configure account-type dimension extensibility, or else data will not consolidate up to the summary cube in a linked cube structure. When reconciling data in a detailed cube, this issue won’t present itself, but when the data tries to consolidate up to a parent entity in the summary cube, it has nowhere to go if you have not extended from a base member. For the most part, this consideration should apply to all configured dimensions, but there are select use cases where it is desirable for the data not to consolidate up to the summary cube and for those members not to be visible to other business segments. Some examples include drivers specific to a business unit, dynamic calculations specific to one segment, and alternate hierarchies specific to one cube.

Conclusion

Vertical extensibility allows OneStream to remain a single source of truth for both corporate and management reporting. By extending dimensions down the entity hierarchy, organizations can avoid fragmented systems and empower users with the detail they need. Proper application of vertical extensibility provides:

- Performance benefits
- Flexibility to assign dimensional detail in a more purposeful way
- Future-proofing by leaving the door open to additional configuration options
- Lower maintenance overhead with a single unified data model

Extensibility should be a discussion topic in every solution design. Its use is the key to unlocking the full potential of the OneStream platform.
UD or attribute to flag past restatements (Budget pot swaps)

Hello, we are about to start our work on next year's budget and we already know that, following a recent restructuring, there will be a lot of pot swaps between departments (e.g. licence costs used to sit in the department using the software but will now all sit under IT). Some of them will just involve moving a cost center and its history from one department to the next, so that will be easy. Our issue is when we do a "pot swap" that moves part of a cost center. In this case we will have a disconnect when we compare actual and forecast unless we restate actual (i.e. also move the cost in actual).

My question is how best to tag/flag the "pot swap" in the actual data so that in reporting we can do a like-for-like comparison (i.e. with past data restated) but also keep the original data. I have two potential solutions:

1. Flag the restatement using a UD: do the restatement of past periods under a specific UD so that we have the ability to see past periods with or without pot swap adjustments.
2. Use attributes: I think attributes would work, but we have not used attributes in the past, so I'm not sure whether it is the best way.

Does anyone have the same issue and came up with an alternative solution? Or does anyone have a view on the above options? Many thanks!