Data Processing and Performance - A comprehensive guide to tables and design
Overview

To maintain a well-performing application, one must understand how the underlying database works and, more importantly, its limitations. Understanding how a system works allows designers and administrators to create reliable, stable, and optimally performing applications. This white paper is intended to guide the design of optimal data processing strategies for the OneStream platform.

First, this document provides a detailed look at the data structures used by the stage engine as well as those used by the in-memory financial analytic engine, providing a deep understanding of how the OneStream stage engine functions in relation to the in-memory financial analytic engine. The relationship between stage engine data structures and finance engine data structures is discussed in detail. Understanding how data is stored and manipulated by these engines will help consultants build OneStream applications that are optimized for high-volume data processing.

Second, the workflow engine configuration is examined in detail throughout the document since it acts as the controller / orchestrator of most tasks in the system. The workflow engine is the primary tool used to configure data processing sequences and performance characteristics in a OneStream application. There are many different workflow structures and settings that specifically relate to data processing, and these settings are discussed in relation to the processing engine that they impact.

Finally, this document defines best practices and logical data processing limits. This includes suggestions on how to create workflow structures and settings for specific data processing workloads. With respect to defining data processing limits, this document helps define practical / logical limits in relation to hard / physical limits and provides a detailed explanation of the suggested logical limits. This is an important topic because in many situations the physical data processing limit will tolerate the amount of data being processed, but the same data could often be processed much more efficiently by adhering to logical limits and building the appropriate workflow structures to partition data. These concepts are particularly important because, when properly implemented, they enable efficient storage, potential parallel processing, and high-performance reporting/consumption.

Conclusion

Large Data Units can create problems for loading, calculating, consolidating, and reporting data. This really is a limitation of what the hardware and networks can support, and your design needs to consider it. From this paper, I hope you can take away some options to relieve some of the pressure points that could appear.

Cube Dimension Assignment
Summary

The following details offer a quick snapshot of this article's core content and primary focus to ensure that it is most relevant to your needs.

What: Cube dimension assignment
When: Early build
Why: Enable future flexibility
How: Assign dimensions to specific Scenario Types and change "(Use Default)" to the Root dimension

Overview

To enable future flexibility, it is foundationally critical to properly configure the cube dimension assignments prior to loading data. Once data has been loaded to a cube, the assignments for the (Default) Scenario Type are locked in. The Root dimension assignments in the above image can be updated on the (Default) Scenario Type in the future, but any Scenario Types that have data and are set to "(Use Default)", like the image below, cannot be changed. This means that if not configured properly, the entire cube must abide by the updates to the (Default) Scenario Type. If the cube dimensions are configured properly, additional dimensions can be added to specific Scenario Types in the future. The example use case illustrated in this guide is adding a customer dimension in Budget to expand the annual planning capabilities. This guide provides example configurations to illustrate the recommended approach and common misconfigurations.

Recommendation

When a cube is created, dimension assignments on specific Scenario Types are set to "(Use Default)" on the Cube Dimensions tab. To properly configure an application for extensibility and enable data model flexibility/expansion in the future, these settings should be updated for the active Scenario Types within each cube.

- (Default) Scenario Type: Assign the Data Unit dimensions of entity and scenario. Leave all non-Data Unit dimensions as Root.
- For all active Scenario Types: Entity and scenario will remain as "(Use Default)." All non-Data Unit dimensions should be assigned a specific dimension. Select Root for all unused dimensions. "(Use Default)" should not remain for any dimension.
- Leave inactive Scenario Types as-is until ready to be activated.

Use Case & Examples

Use Case: A client with a live OneStream application wants to enable a customer dimension in Budget to expand their annual revenue planning capabilities and include their top customers. Data has already been loaded to Actual and a prior year Budget.

Configuration #1: Recommended Configuration

The recommended configuration of cube dimension assignments will enable the application to take full advantage of extensibility. This configuration will allow the addition of new dimensions to specific Scenario Types in the future and eliminate the need to "stub out" unused dimensions for future use. To configure properly, the Data Unit dimensions (entity and scenario) will be assigned to the (Default) Scenario Type, and all remaining active dimensions will be assigned to their respective active Scenario Types. Any inactive dimensions should be set to "Root" instead of "(Use Default)". Recommended initial assignment is as follows:

[Screenshots: (Default), Actual, and Budget Scenario Type assignments]

After the Actual and Budget Scenario Types both have data in them, we are still able to change our UD4 dimension assignment in the Budget Scenario Type to include our new summary customer dimension. Be aware that once you hit save, the new UD4 assignment will be locked in, and you will be unable to change it if there is data in that cube and Scenario Type combination. Changing from a Root dimension is a one-time change that cannot be reverted if there is data in this cube and Scenario Type combination.
After adding the new dimension to the Budget Scenario Type, one will see the history in UD4#None and the new dimension members active for input in subsequent budget cycles. Since it was assigned to the specific Scenario Type and not (Default), you will notice that this new UD4 dimension is invalid for the Actual Scenario Type. This configuration will also allow the future addition of UD5 and UD6 dimensions by following these same steps.

Configuration #2: Improper Assignment to the (Default) Scenario Type

A common error is to assign all dimensions to the (Default) Scenario Type and only use the Scenario Type-specific tabs for those that differ. This configuration will work and will also allow you to add additional dimensions in the future, but it is much less flexible. Additional dimensions in this setup must be assigned to the (Default) Scenario Type and will therefore apply to all active scenarios. In the example below, all active dimensions have been assigned to the (Default) Scenario Type and a different Account dimension has been assigned to the Budget Scenario Type to enable the use of extensibility. The remaining non-Data Unit dimensions have been left as (Use Default) for both the Actual and Budget Scenario Types:

[Screenshots: (Default), Actual, and Budget Scenario Type assignments]

In this setup, attempting to assign our new customer dimension to UD4 on the Budget Scenario Type will display an error. Due to the use of (Use Default) on the active Scenario Types, they are now locked into whatever the (Default) Scenario Type has set for these dimensions, and they cannot be updated. To add our new customer dimension, one is forced to assign it to the (Default) Scenario Type. When assigning to the (Default) Scenario Type, you will notice that it works for Budget as required (same as the recommended configuration), but it is now active for the Actual Scenario Type as well, which was not the desired result.

With this configuration, existing business rules and member formulas will need to be validated throughout the application to ensure the right intersections are specified. This additional dimension contains valid intersections in all Scenario Types; therefore, rules need to be more explicit in their filtering and writing of data. If rules are written improperly or left too open, this new dimension may cause a performance impact, or zeros and other bad data may be calculated in these new intersections.

Configuration #3: Improper Assignment of (Use Default)

Another erroneous configuration is to assign all dimensions to their respective Scenario Type but leave unused dimensions as (Use Default). This configuration will also work and will allow you to add additional dimensions in the future, but it is also not as flexible as the recommended setup. Additional dimensions in this setup must be assigned to the (Default) Scenario Type and will therefore apply to all active scenarios. In the example below, all active dimensions have been assigned to their respective Scenario Types. The remaining inactive dimensions have been left as (Use Default) for both the Actual and Budget Scenario Types:

[Screenshots: (Default), Actual, and Budget Scenario Type assignments]

In this setup, attempting to assign our new customer dimension to UD4 on the Budget Scenario Type will result in the same error as Configuration #2 above, again forcing the assignment to the (Default) Scenario Type, which will apply to all active Scenario Types with the setting of (Use Default) for UD4.
This assignment will also work for Budget, but as with Configuration #2 above, you will notice that it is now active for the Actual Scenario Type as well, which was not the desired result. As with Configuration #2, existing business rules and member formulas should be validated throughout the application to ensure the right intersections are specified. Additional NoInput rules may be necessary to limit input to these intersections in Scenario Types where they do not apply.

Considerations

The recommended configuration for cube dimension assignment eliminates the need to "stub out" unused dimensions for future use. If unused dimensions are assigned to "Root" on their respective Scenario Types, they can be changed in the future. One should not create a placeholder dimension for those that are unused (UD4, UD5, and UD6 in our example above) as this will only limit future flexibility.

If additional dimensions are not configured properly, you can only update from Root to a specific dimension once. If you accidentally save an incorrect dimension update, you're locked into that change. Plan ahead and make sure these settings are properly updated before saving the changes. Despite adding flexibility for the future, configuring the cube dimensions in this way still does not allow you to change active dimensions with data.

Conclusion

As you can see from the examples above, non-Data Unit dimensions should be assigned to the cube by Scenario Type, with (Use Default) changed to the Root dimension for those that are inactive at setup. Assigning non-Data Unit dimensions to the (Default) Scenario Type and leaving (Use Default) on the specific Scenario Types will limit the benefits and flexibility provided by extensibility. Improper setup will force the entire cube to conform to future updates to the (Default) Scenario Type. Conversely, assigning non-Data Unit cube dimensions to specific Scenario Types and utilizing the Root dimensions instead of (Use Default) will open additional growth opportunities for the application. This recommended configuration is also more flexible than "stubbing out" dimensions for future use, as you do not have to consider the potential pitfalls related to that design.

Is it possible to wrap text in a column header and change its width within a Cube View?
Answer

Text wrapping is not currently available within a Cube View itself. This functionality is offered in Excel, or by placing a grid view component next to the cube view data explorer in a dashboard and setting word wrapping on the grid view.

Source: Office Hours 2020-05-17 Partner Enablement

Extender: Auto Update Member Property
This snippet will modify a Member property that can vary by Scenario Type and/or Time. Just pass the relevant ScenarioType ID or Time member ID to set it in a more specific way; it will then appear as a "Stored Item" in the interface.

Note: SaveMemberInfo does not create entries in Audit tables, which means the Audit Metadata report will not contain anything related to this operation. For this reason, we do not recommend using this snippet outside of implementation activities or in production environments.

    'Get the MemberInfo object for the member you want to update, in this example an Account.
    Dim objMemberInfo As MemberInfo = BRApi.Finance.Members.GetMemberInfo( _
        si, DimType.Account.Id, "<Member Name>", True)

    ' Retrieve member properties so we can modify them.
    Dim accountProperties As AccountVMProperties = objMemberInfo.GetAccountProperties()

    ' Change the Account Type.
    accountProperties.AccountType.SetStoredValue(AccountType.Revenue.Id)

    ' Change the default Text1 value.
    ' If you want to set it for a specific ScenarioType and/or Time,
    ' use the relevant values in the first 2 parameters.
    accountProperties.Text1.SetStoredValue( _
        ScenarioType.Unknown.Id, DimConstants.Unknown, "<UpdatedValue>")

    ' Save the member and its properties.
    Dim isNew As TriStateBool = TriStateBool.TrueValue
    BRApi.Finance.MemberAdmin.SaveMemberInfo(si, objMemberInfo, False, True, False, isNew)
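For context, a snippet like this is typically run from an Extensibility (Extender) business rule. The skeleton below follows the standard Extender rule template; the rule name and the placement of the snippet inside Main are illustrative assumptions, not part of the original article.

    Namespace OneStream.BusinessRule.Extender.AutoUpdateMemberProperty
        Public Class MainClass
            Public Function Main(ByVal si As SessionInfo, ByVal globals As BRGlobals, _
                    ByVal api As Object, ByVal args As ExtenderArgs) As Object
                Try
                    ' Paste the snippet above here, replacing the placeholder
                    ' member name and property values with real ones.
                    Return Nothing
                Catch ex As Exception
                    Throw ErrorHandler.LogWrite(si, New XFException(si, ex))
                End Try
            End Function
        End Class
    End Namespace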
Filter IC Dimension by Entity Property

A common requirement for reporting is to be able to filter the IC dimension by some property that exists only on the original Entity members. This can be achieved with a custom Member List defined in a Finance Business Rule.

    Select Case api.FunctionType

        ' MemberListHeaders support is optional but good practice.
        Case Is = FinanceFunctionType.MemberListHeaders
            Dim mListHeaders As New List(Of MemberListHeader)
            ' Add the name of your list:
            mListHeaders.Add(New MemberListHeader("withText1"))
            Return mListHeaders

        ' Here we do the real work.
        Case Is = FinanceFunctionType.MemberList
            If args.MemberListArgs.MemberListName.XFEqualsIgnoreCase("withText1") Then
                ' This list of members will be populated below.
                Dim ICs As New List(Of Member)
                ' Amend parameters as necessary here.
                Dim dimensionName As String = "CorpEntities"
                Dim memberFilter As String = "E#Root.Base.Where(Text1 <> '')"
                ' Filter the Entity dimension by some criteria.
                Dim entities As List(Of MemberInfo) = BRApi.Finance.Members.GetMembersUsingFilter(si, _
                    BRApi.Finance.Dim.GetDimPk(si, dimensionName), _
                    memberFilter, _
                    True)
                ' Retrieve IC members corresponding to the selected Entity members
                ' and push them into the output list.
                For Each entityMInfo As MemberInfo In entities
                    If entityMInfo.GetEntityProperties().IsIC Then
                        ICs.Add(BRApi.Finance.Members.GetMember(si, DimTypeId.IC, entityMInfo.Member.Name))
                    End If
                Next
                ' Wrap with the MemberList object and return.
                Return New MemberList(New MemberListHeader("withText1"), ICs)
            End If

    End Select

This can then be referenced in Cube Views and elsewhere like this:
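The original article's reference example did not survive extraction, so the line below is a hedged reconstruction using OneStream's CustomMemberList member filter function; the business rule name (IC_MemberLists) is an assumed placeholder for whatever your Finance Business Rule is actually called.

    I#Root.CustomMemberList(BRName = IC_MemberLists, MemberListName = withText1)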
Extensibility Series: Dive into Horizontal Extensibility

What is Horizontal Extensibility?

Horizontal extensibility is the game-changing technology in OneStream that allows for sharing and inheriting metadata across Scenario Types. With this ability, the business processes can dictate application design instead of vice versa. Within a OneStream cube, we can assign differing base members to each Scenario Type while sharing all relevant parent members. This means that not only can actuals, budgets, forecasts, long-range plans, detailed data sets, and more be contained within the same cube, but their proper levels of input can be incorporated into the model as well.

In legacy corporate finance software landscapes, it is common to see some of the following:

- Dummy input members to key/load data at different levels in a hierarchy
- Separate cubes or disparate systems with slight variations of the same hierarchies
- Multiple versions of the same common reports
- Duplicate data residing in multiple tools to facilitate the separate needs of each process
- Inconsistent data tie-out points across systems causing skepticism and confusion

In OneStream, we can create a single common set of master metadata, and through horizontal extensibility, the input levels can follow each process's distinct needs. The most common example of this is a budget or forecast that is completed at a more summary level than the actual financial data. In the CPM Blueprint application, you can see this type of extension in the account dimension. In this application, as shown in the image below, Net Revenue is the base input member for budgeting and forecasting while actuals are captured at a lower granularity. No longer do we need to maintain separate charts of accounts, create dummy input members, or store input data in separate systems to accomplish this. We don't even have to maintain separate reports. All these needs, from input to reporting, can be maintained in a single OneStream cube with commonalities shared.

The example above is within the account dimension, but to understand and unlock the full potential of horizontal extensibility, we will look at how this concept can be applied across multiple OneStream dimensions. In the CPM Blueprint application, we can see this use of horizontal extensibility when we look at the LongTerm Scenario Type. The long-range planning process in this application takes place at a more summary level. Like budgeting and forecasting, the account input is at the Net Revenue level. However, in the LongTerm Scenario Type, input into the Geography and Product dimensions (shown below) varies from Actual, Budget, and Forecast. The ability to utilize extensibility in these ways unlocks so many possibilities when designing scenarios, cubes, and global applications.

Using horizontal extensibility across Scenario Types allows us to make the user experience more targeted by adding/removing members and entire dimensions from Scenario Types where they are not valid. The examples above focused on processes occurring at different levels in the same hierarchies, but what if an entire dimension is not valid? An example of this in the CPM Blueprint application is the Cash Flow dimension. Below, you can see this dimension assigned to the Actual Scenario Type, meaning all Cash Flow members are included and valid in scenarios created with that Scenario Type. However, in the active planning Scenario Types (Budget, Forecast, LongTerm), the "Root" dimension is assigned on the cube properties, meaning only the None member is valid.
Organizing the cube to be more targeted in this way will reduce confusion among end users and limit potential data unit explosion caused by rogue rules, imported zeros, or other configuration issues. Business rules without proper filtering and removal of zeros can potentially assess and/or write to significantly more intersections than intended. Additionally, configuring the cube properties correctly by assigning the Root dimension for those that are inactive on a Scenario Type will allow you to update and add new dimensions in the future. An example of proper cube dimension assignment is shown in the next section.

How Should Horizontal Extensibility be Applied?

Applying horizontal extensibility follows three main steps:

1. Planning & Preparation
2. Configuration
3. Cube Assignment

Planning & Preparation

Every project should begin with a design. Measure twice, cut once. Start with a list of processes and data sets that are planned to be incorporated into OneStream. Next, list out the various reporting dimensions that are used in each. Finally, create a grid with the processes in the columns, the dimensions in the rows, and fill in where they are valid.

This chart can start to help us see that actuals and long-range planning should utilize separate Scenario Types (to vary the inclusion/exclusion of entire dimensions), but it does not give us clarity into the differences between the shared dimensions. In this example, accounts are needed for all four of the processes, but we don't yet know which accounts are needed in each. At this point, I like to compile a complete list of all possible members and hierarchies and go through the exercise of determining where they should be valid, as in the chart below.

After this exercise has been completed for accounts, it should be repeated for each reporting dimension identified to get a full understanding of each data set. Based on the above account-level breakout between the four different processes, we have some decisions to make. The existing process is set up as shown above, but should it be that way in OneStream? Should we create separate account extensions for both budget and forecast? Or do we potentially see the inclusion of Balance Sheet information in the forecast? Are there any initiatives to budget and/or forecast at a different level than currently? Is the business happy with the detail in which the long-range plan is generated? This is the time to avoid a lift-and-shift mentality and set the foundation for future growth in the platform. Really discuss the possible extension points in each dimension and come to a decision as a business. This is a key design decision.

Configuration

After discussing internally with all stakeholders and making decisions around horizontal extensibility, it is time to configure the dimensions and identify any issues with the extension points. After creating a summary dimension at the highest identified level in the member structure, the extended dimensions will utilize the Inherited Dimension property on creation. In the newly created dimension, we will notice the inherited members in gray, and we can begin to add in the extended members. In most cases, we recommend extending from base members only. Vertical extensibility will be covered in another article, but when we look at consolidating data between linked cubes, extending from parent members causes the data to not consolidate from sub cube to parent cube. It should be noted that the Entity dimension does not follow this same restriction.
In the image below, you can see an account extension that does not follow our recommendation to extend from base members on the left (Example A) and an account extension that does follow our recommendation on the right (Example B). In Example A, two issues have been identified:

1) Five base accounts have been extended from the parent OtherOpExp. You can see this difference via the black and gray text in the account dimension library. Accounts 541000, 541100, and 541950 are in gray text, signifying they have been inherited from the parent dimension, while the five base accounts from 541200 - 541600 are in black text, signifying they have been created in the currently selected dimension. This is an example of extending from a parent member, and it will not consolidate correctly across linked cubes. To resolve this configuration issue, it is common to see one of three solutions:

- All the base members under OtherOpExp are included in the parent dimension and then inherited into the extended dimension. Input in the parent dimension would be to 541950 - Other or one of the other base members.
- Only the member OtherOpExp is included in the parent dimension and all child accounts in the extended dimension. Input in the parent dimension would be aggregated to an OtherOpEx level without the breakout of 541000, 541100, and 541950.
- A new parent member would be created in the summary dimension to extend from. An example of this is shown below. This facilitates the desired split of base members between dimensions but may cause confusion among end users when viewing the hierarchy. The risks of this can be mitigated with proper end user training and documentation.

Each of these solutions comes with pros and cons that should be weighed and discussed by the business.

2) The parent accounts TravelEntExp and HRExps are extended from the parent account OpEx. This is an example showing that even though the base members do not have siblings in the inherited dimension, the parents cannot have siblings in the inherited dimension either. All members in the extended dimension, both parent and base, should be extended from a base member. Example B shows the common solution for this configuration, with the Travel and HR expense parents included in the parent dimension and all base members in the extended dimension. However, a similar approach with the _EXT parent member could be applied here as well.

While discussing these extension points and deliberating whether to move members up/down a dimension or add _EXT parent members to apply extensibility properly, the future state goals should be considered. If the summary dimension you are creating is to facilitate the forecast process today, but that process does not include details around travel and HR expenses, might it in the future? Should you include these members and grow into their use? Is there a roadmap to expand your planning capabilities in these areas? Moving parent members between dimensions later can be accommodated, but moving base members is more difficult.

The recommended practice is to extend from a base member, but there are some outlying use cases where extending from parent members is acceptable:

- If the intent is to not consolidate the data up the linked cube structure.
- If there is no linkage between cubes and the intent is to limit the members visible to end users.
- If it is in the entity dimension.

As a reminder, the example shown was of the Account dimensions, but this also applies to Flow dimensions and User Defined dimensions.
To aid in the process of creating and validating extensible dimensions, the utility "Extensibility Relationship Analysis" on Solution Exchange can help identify potential issues with parent-child relationships.

Cube Assignment

After we have designed and created our account dimensions in an extensible fashion, we need to assign them to the cube. Properly applying dimensions on the cube settings is critical to unlocking the flexibility that horizontal extensibility provides. Non-Data Unit dimensions (Account, Flow, UD1 - UD8) should be applied on the specific Scenario Types that are in use. Any unused dimensions on those active Scenario Types should be assigned to the "Root" dimension (e.g., RootUD4Dim in the image below). Assigning the Root dimension instead of (Use Default) allows that dimensional assignment to be changed a single time later and activated for data input on a go-forward basis. More information on proper cube dimension assignments can be found in the linked article.

Recommendations & Considerations

Horizontal extensibility is more about providing flexibility and data model integrity and less about managing data unit sizes. Yes, it can shrink the potential data unit size and mitigate the impact of a rogue calculation, but the main focus is on creating a single source of truth by allowing for a single set of master metadata. It is such a powerful driver of adoption to be able to meet the various parts of the business where they operate.

When planning for extensibility, one should be forward thinking. Ask questions during design and be mindful of future expansion. Talk to other parts of the business to understand how they operate. What levels do they report or plan at? What is on the roadmap? Is there any defined need for extensibility that can be captured now to facilitate future adoption? Horizontal extensibility should be applied with a purpose. Define the need for extending and get alignment throughout the business. Parent members can be moved between dimensions since there is no data stored in the database at that level, but base members cannot change dimensions. If there is a plan to include certain members or hierarchies in a data set in the near future, you may want to incorporate them now.

With proper configuration, horizontal extensibility should then be utilized in Cube Views, Parameters, Business Rules, and more to drive standardization. Below are a few examples of how this can be applied.

Member Filter Expansions

Applying extensibility and utilizing the provided member filter expansions can allow the same row/column set to be used across varying Scenario Types in OneStream. If the business has a standard Income Statement where the only difference between processes is the level of detail, it can be created as a single report and shared across Workflows. Two member expansion functions to point out here are .Where() and .Options().
.Where(MemberDim = Value)
Example: A#60000.Base.Where(MemberDim = |WFAccountDim|)

.Options(Cube = CubeName, ScenarioType = Type, MergeMembersFromReferencedCubes = Boolean)
Example 1: Targeting a specific extension point
A#19999.Base.Options(Cube = [Total GolfStream], ScenarioType = Actual, MergeMembersFromReferencedCubes = False)
Example 2: In combination with XFMemberProperty() to create a more dynamic member formula
A#60000.TreeDescendantsInclusive.Options(Cube = |WFCube|, ScenarioType = XFMemberProperty(DimType = Scenario, Member = |WFScenario|, Property = ScenarioType), MergeMembersFromReferencedCubes = False)

Calculations

The same member expansion functions shown above should be considered when writing calculations across the platform. They can be a tool to make calculations more dynamic and are necessary at times to make them more targeted. Another consideration is the function api.Data.ConvertDataBufferExtendedMembers when copying data across extended dimensions. A common need is the ability to copy actual data into a forecast, and this function is a performant way to do so while also accounting for extensibility. The ConvertDataBufferExtendedMembers function aggregates the data from extended members in the source data unit to the base level of the target data unit. After aggregating the data in memory, it can then be manipulated and/or stored using the target dimensionality. Additional information on utilizing ConvertDataBufferExtendedMembers can be found in the OneStream Finance Rules and Calculations Handbook and the Tech Talks series on OneStream Navigator. A brief sketch of this pattern follows.
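As an illustration only: the sketch below shows one common shape for this copy, run from a Finance Business Rule within the target (Forecast) data unit. The cube, entity, and time names are hypothetical, and the exact ConvertDataBufferExtendedMembers signature should be verified against your platform version and the handbook.

    ' Pull the Actual source data into memory (names are hypothetical).
    Dim sourceFilter As String = "Cb#GolfStream:E#Houston:C#Local:S#Actual:T#2024M1:V#Periodic"
    Dim sourceBuffer As DataBuffer = api.Data.GetDataBufferUsingFormula(sourceFilter)

    ' Aggregate extended source members down to the base level of the
    ' target cube/scenario dimensionality.
    Dim convertedBuffer As DataBuffer = api.Data.ConvertDataBufferExtendedMembers( _
        "GolfStream", "Forecast", sourceBuffer)

    ' Register the converted buffer as a formula variable and store it,
    ' stripping zeros so the Data Unit does not grow unnecessarily.
    api.Data.FormulaVariables.SetDataBufferVariable("convertedBuffer", convertedBuffer, False)
    api.Data.Calculate("V#Periodic = RemoveZeros($convertedBuffer)")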
Finally, when applying horizontal extensibility, it is important to keep in mind that it is not just applied to a single hierarchy. The business must be mindful of all alternate hierarchies and incorporate extensibility there as well. It should also be thought through how certain extension points in one dimension could impact its use in another dimension. For example, excluding balance sheet accounts from a forecast could impact the ability to make use of a Cash Flow dimension and corresponding calculations.

Conclusion

If applied properly, horizontal extensibility can provide amazing benefits:

- Reduce technical debt by incorporating many fragmented processes
- Encourage adoption by meeting users where they operate
- Facilitate reporting and reduce data movement/maintenance
- Provide a single source of truth

The topics outlined in this article should be discussed and utilized during a design to properly apply horizontal extensibility. For additional examples, the CPM Blueprint application can be referenced. Examples in this application include Accounts, Geography, Product, Cost Center, Customer, and Vendor.

Extensibility Series: An Overview of Extensibility in OneStream

What is Extensibility?

The concept of extensibility in OneStream is the capability to incorporate multiple use cases and future growth with a single foundation. I like to relate this to a dinner table that can expand by adding table leaves while maintaining the same integrity. The OneStream platform, in tandem with Workflow and Extensible Dimensionality, expands on this concept by providing users with multiple ways to extend their platform footprint. When designing an application or planning for expansion to the existing footprint, these concepts are crucial to understand and apply correctly. Extensibility in OneStream is a broad topic and can mean something different to each person in the community, so I would like to break our language on this topic down further into the following categories:

- Horizontal Extensibility
- Vertical Extensibility
- Workflow Extensibility
- Platform Extensibility

Horizontal or Scenario extensibility relates to the ability to extend and use different levels of a hierarchy for different business purposes. It also provides the ability to target when and where dimensions need to be included in the data model. Have you ever wanted to input data at a parent level? Through horizontal extensibility, that parent can become a base for input in a different scenario by using the Scenario Type settings and properly applying Cube Dimension Assignments. What if you have highly detailed metadata that only applies to a specific use case? Horizontal extensibility can help limit the potential intersections that aren't valid for all the other use cases by assigning it only where it makes sense.

Vertical or Entity/Cube extensibility relates to the ability to include/exclude detail at different levels up the entity hierarchy. The Data Unit is a key concept to understand in OneStream, and it is important to properly manage its size to allow for optimal performance while accounting for future growth. Vertical extensibility also relates to varying dimensionality across business units. When you report consolidated financials, do you need to see the lowest level of department detail? Each individual product? Every project? The most granular GL accounts? If the answer is no to any of these, vertical extensibility can help. Lower-level entities can still report at a detailed level, but the data can be collapsed to a summary level to facilitate reporting and increase performance. Does your organization have Business Units with very different operations? Perhaps vertical extensibility can provide the flexibility you need to vary the dimensionality at a detailed level but consolidate to a common summary level.

Workflow extensibility relates to the ability to vary the input steps and methods within each process flow. Workflow steps and settings can be adjusted on each Scenario Type or can be combined if multiple processes follow the same responsibility hierarchy. Workflow extensibility can be configured on each parent cube to tailor the software interface to match the process needs. Is your Actual data collection process more import driven and the Planning process more forms, calculations, and dashboard driven? Workflow extensibility can help split these processes and make them easier to manage from an administration standpoint. Are some data collections imported in a centralized fashion while others have their responsibility distributed to more end users?
Entities can only be assigned once in a Workflow hierarchy, so to vary the entity signoff responsibilities, Workflow extensibility should be utilized to allow for differing entity assignments.

Platform extensibility relates to the ability to vary where data is stored and how it is utilized within the platform. It also includes the ability to have multiple applications within one environment that can talk to each other. OneStream has the unique ability to consume, utilize, and report on data regardless of whether it is stored in cubes, relational tables, or even externally. The capabilities in this category are expanding rapidly and should be considered during all solution design activities. Do you plan at a named personnel level? By each individual capital project? It's important to determine what is necessary in the cube for consolidated reporting versus what can live outside the cube to be reported on at a base entity level. Through platform extensibility, we can combine cube data with relational data to achieve the optimal balance between performance and reporting needs. Is the process you are designing more operationally driven and your data dimensions more transient in nature? Perhaps none of a specific data set needs to live in a cube, or even in OneStream at all. Platform extensibility allows us to utilize entirely relational data, web content, and even external data sets.

How should one think about Extensibility?

Extensibility is foundational to OneStream. It should be thought of as a tool as essential as the level. Without it, you can probably get the job done and, on the surface, it might look okay as well. But over time, you are likely to discover structural integrity issues. It is probable that what you built may no longer be able to do everything you need it to. We use extensibility to right-size data units. We use extensibility to input at the right level. We use extensibility to fit the business process. We use extensibility to set the foundation for the future.

I've heard people talk about extensibility as though you are "locked in" to the choices you make now. While there is some truth in that, it should not be thought about as a box, but as a key to the future. Applying extensibility opens the door to so many more options in the future. Design the process and use extensibility as the tool to bring it all together. As mentioned in the Guiding Principles article, the importance of designing the process cannot be stressed enough. Don't look for a tool, look for a problem and use the tools provided. Be forward thinking during design and ask questions of all stakeholders to make sure future functionality is accommodated. Be sure to understand how the business operates and what is on the roadmap so that the proper foundation can be built.

Recommendations

I will begin with a disclaimer: there is not a single be-all, end-all way to implement extensibility in OneStream. I have seen applications with no extensibility and ones with too much extensibility. While there is a middle ground that should be found, the applications without extensibility are those that much more commonly have issues. A lack of vertical and platform extensibility tends to lead to performance issues. A lack of horizontal and Workflow extensibility tends to lead to flexibility issues. The applications with too much extensibility less commonly run into performance or flexibility issues, but they do have a higher maintenance burden.
This is why, as architects, it is our job to balance performance, usability, and maintenance when thinking about these four types of extensibility. It is our recommendation that extensibility be considered in every single design and that it be implemented nearly every time. Not using extensibility should be an exception, not the norm. During a solution design, I like to fill out a matrix like the one below to visualize what detail needs to be included where. With this, you can start to shape the Scenario Types, cubes, dimensions, and any platform extensibility.

When looking for extensibility configuration examples, look no further than our CPM Blueprint application. This application has example configurations using our leading practices. Looking at UD1 as an example, one can see our common configuration of a "MainUD1" dimension parent to summarize the BU and Cost Center details into a common dimension. This is a concept we apply to all User Defined dimensions to facilitate both vertical and horizontal extensibility. To facilitate vertical extensibility, dimensional detail that is not needed in a parent cube can be collapsed by assigning MainUD1. The dimensional detail is then extended from "TotUD1" to expand into the necessary levels of detail for each data collection and reporting need. This allows both "None" and "Top" to be active at all levels in the dimensional hierarchy.

Another example of extensibility on display in the CPM Blueprint application is in the cube configuration. Focusing on the financial reporting structure in this application, it follows our recommendation for a base-summary cube relationship between Business Unit and total company reporting. I commonly apply this configuration even if there is only a single child cube and a single parent cube because it opens the door to so many more options in the future:

- More flexibility to expand child cubes horizontally and plug in different dimensionalities
- Greater ability to collapse the data unit if its size becomes an issue
- Further future-proofing, as it allows for more platform expansion with the same foundation

Finally, this application also has Workflow extensibility on display. On the cube settings, you can see the connection between top-level and base cubes. You can also see the Workflow suffixing applied in the CPM Blueprint application. In this example, the Actual Scenario Type has a different process flow and responsibility hierarchy from other data collections, so it has been given its own suffix of "ACT." Budget and Forecast follow the same process flow and responsibility hierarchy and therefore share a Workflow suffix of "BUDFCST." This allows each process to have its own configuration and entity assignment.

Conclusion

Extensibility in OneStream cannot be overlooked. During a solution design, each of the four types of extensibility should be weighed and discussed to see which tool is right for the job:

- Horizontal Extensibility
- Vertical Extensibility
- Workflow Extensibility
- Platform Extensibility

If you conclude that extensibility is not right for you, be absolutely sure. If the choice were up to me, the benefits of future flexibility and performance reliability would greatly outweigh the potential need for additional administration overhead and end user training that come with extensibility.

Guiding Principles
Overview

Whether designing or building a OneStream application, it's vital to keep end-user experience, performance, and administration in balance. An application that lacks any one of these puts full acceptance, and ultimately a successful rollout within the organization, at risk. In this article, you'll read about guiding principles that will steer you towards establishing the balance needed within your application.

Design

Design the process. It's important to know exactly who is doing what and when they're doing it. By designing the process, you're designing the Workflow. There are many steps that need to be completed throughout it. An accounting-to-reporting process may include:

- importing data from a source system or a file
- entering data via form
- posting journal entries (which can include preparation, approval/rejection, and posting steps)
- calculating/translating/sub-consolidating data
- reviewing/rejecting/approving data
- publishing final reporting packages

Alternatively, a planning or forecasting process often looks completely different from the accounting process:

- seeding data from actuals, prior forecasts, or budgets
- updating drivers, limits, and percentages
- creating targets for different departments, regions, or business units
- adding/updating calculations
- calculating/translating/sub-consolidating data
- reviewing/rejecting/approving final submissions

The process includes knowing your business rules and member formulas along with how and when they will be triggered throughout the data submission process. Calculate, translate, and consolidate only when necessary and not excessively. One common request is to run calculations when a user saves data in a form. It's important to know which calculations will run when that user saves. If the process or calculations haven't been well planned, the system can run calculations unnecessarily and/or excessively, and that takes time. This wait time negatively impacts performance (or perceived performance) as well as the end-user experience.

Utilize extensibility in the Cube design. This foundational design principle should be incorporated into every OneStream application. Our customers' end-user experience, data quality, and application performance all benefit from extensibility, as it's one of OneStream's many differentiators. Although the business requirements may not alert the implementation team that extensibility is necessary during the initial implementation, it provides future flexibility should the need arise. The maintenance of extended cubes and dimensions may not be as straightforward as in other products; however, administrators will quickly learn where and how to maintain them.

Write efficient, concise calculations. Doing this in both business rules and member formulas improves performance by reducing redundancy and excess. You establish efficiency by pairing process knowledge with good VB.net and/or C# practices. Specifically for OneStream, keep the following in mind.

Understand how, when, and by whom the calculation will be triggered – knowing the entire process, from data submission through corporate consolidation, will help you optimize application performance. Building the triggers into the process results in rules running only when necessary and not excessively. This couples tightly with Workflow design, covered later in this article. You can also get creative by building in "perceived performance". Let's take a forecast seeding process as an example.
For the M9 forecast, OneStream needs to copy nine months of Actuals data along with three months of the prior forecast data as a starting point for the FP&A team. For simplicity, assume each month takes one hour to complete, so once M9 closes, FP&A needs to wait 12 hours to begin their process. If we think about this, M1 – M8 have been closed for some time. We can seed M1 – M8 data while no one's waiting for it to complete. At the same time, the prior forecast data has been ready for weeks prior to the current month closing, so we'll seed that, too. Now, when M9 closes, the only data that needs to be copied is M9 Actuals. Although it still takes 12 hours to seed all months, FP&A users only need to wait an hour after M9 closes to start their process. This is what I mean by "perceived performance" – it's not faster, but because end-users wait less, it seems faster.

Add conditions for data unit dimensions – most calculations don't need to run on both an entity's local currency and a translated currency because OneStream translates the result of the calculation. However, there are exceptions to this, such as copying Net Income from the P&L to the Balance Sheet. When the NI is copied to Current Year Net Income in local currency, if OneStream were to translate this, it would use the closing rate and direct method – we don't want this. In this case, we want the calculation to happen in both local and translated currencies. A second example of adding conditions on data unit dimensions is excluding the calculations from running on parent entities. Parents consolidate the values of their children. Once that happens, if the calculation runs again, the result of the calculation will likely yield the same result as the consolidation. This introduces redundancy and can negatively affect the overall performance of the application.

Target specific intersections when writing clear and calculate statements – leaving a dimension open on a calculation when the calculation will yield results on a small subsection of that dimension doesn't necessarily mean that performance will suffer. However, OneStream is evaluating intersections that will never yield data, and that takes time. It may be milliseconds for that particular calculation, but those add up quickly in applications containing a high volume of calculations.

Minimize nested loops and eliminate looping over lists – by itself, looping through lists of text is fast and unnoticeable to an end-user. Updating metadata properties via APIs within loops also has a minimal effect on rule performance. However, introducing database calculation calls into those loops is where the time begins to add up. To counter this, OneStream introduced filtering into its database calculation calls. So instead of creating a list of base members under a parent and doing a calculation, write the calculation and filter it on Parent.Base.

Set data buffers once, outside of any necessary For/Next loops – data buffers should only be used when cell-by-cell processing is necessary. While looping through each cell, avoid database calls. Instead, create a result data buffer and, while looping, add each result cell to it. Once the loop completes, write the result data buffer to the database. You're only hitting the database once rather than on every trip through the loop.

Minimize writing zero or near-zero data to the database – by wrapping your calculate statements with RemoveZeros, as in the sketch below.
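To make the data-unit-condition and RemoveZeros principles concrete, here is a minimal, hedged sketch of a Finance Business Rule calculation; the account names are hypothetical, and the entity/currency checks use standard Finance api functions that should be verified against your platform version.

    ' Run the calculation only where it is needed: base entities in local currency.
    ' Parent entities receive consolidated values from their children, so
    ' re-running the calculation there would be redundant.
    If Not api.Entity.HasChildren() Then
        If api.Cons.IsLocalCurrencyForEntity() Then
            ' RemoveZeros keeps zero and near-zero results out of the database,
            ' limiting Data Unit growth. Account names are hypothetical.
            api.Data.Calculate("A#GrossMargin = RemoveZeros(A#Revenue - A#COGS)")
        End If
    End If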
Unnecessary zeros in the database generally provide no value while increasing Data Unit size, which can negatively impact performance. That being said, copying data between two scenarios that have differing View and No Data Zero View properties often requires loading/calculating zeros.

Establish standards, minimize exceptions, and create dynamic, consistent artifacts. Inconsistencies among artifacts and within processes can increase maintenance complexity, lengthen the time to resolve issues, and confuse end users and administrators. Standard naming conventions, a seemingly trivial subject, can greatly improve the administrator's experience maintaining the application. Consistency among Cube Views allows easy adoption for both end-users and administrators. Sharing row and column sets and utilizing parameters and member filters eases maintenance.

Conclusion

The two most inadequately designed areas that I've seen are Workflow and extensibility. The requirements and design phases understandably occur too early in the implementation to know the exact steps and tasks that will happen in the final Workflow. However, it's important to have a mid- to high-level outline in mind. It provides the skeleton on which to build and gives the implementation team, including SMEs, an idea of what the Workflow may look like.

You'll often find that customers may push back on building extensibility into their application. Why? Not only is it a difficult concept to grasp for those new to OneStream, but it's also a challenge for them to understand it enough to realize the benefit. I suggest that you collect the requirements and present a design that includes extensibility, explaining why it's the best design for their application.

It's important to keep all the principles in mind throughout the implementation. Regardless of whether you're writing rules or member formulas, building Cube Views or dashboards, or designing the cube structures or process, think about how it affects the end-user's experience, the application's performance, and the administrator's responsibility to maintain the application once the consulting team departs.

Original Source: Blueprint Bulletin