Maximizing Cube Utilization in OneStream Applications
Cubes control how data is stored, calculated, translated, and consolidated based on the Dimensions assigned to them. Cubes are flexible and can be designed for a specific purpose (an HR or People Planning Cube; a Budget, Sales, or Cost Drivers Cube; or even a Tax Cube) or for a specific data type in a OneStream XF Application. Dimensional assignments can vary by Scenario Type. An application can have multiple Cubes that share Dimensions and data. Separate Cubes may be used to hold data outside of the main financial Cube.

Data Processing and Performance - A comprehensive guide of tables and design
Overview

To maintain a well-performing application, one must understand how the underlying database works and, more importantly, its limitations. Understanding how a system works allows designers and administrators to create reliable, stable, and optimally performing applications. This white paper is intended to guide the design of optimal data processing strategies for the OneStream platform.

First, this document will provide a detailed look at the data structures used by the stage engine as well as those used by the in-memory financial analytic engine, providing a deep understanding of how the OneStream stage engine functions in relation to the in-memory financial analytic engine. The relationship between stage engine data structures and finance engine data structures will be discussed in detail. Understanding how data is stored and manipulated by these engines will help consultants build OneStream applications that are optimized for high-volume data processing.

Second, the workflow engine configuration will be examined in detail throughout the document, since it acts as the controller / orchestrator of most tasks in the system. The workflow engine is the primary tool used to configure data processing sequences and performance characteristics in a OneStream application. There are many different workflow structures and settings that specifically relate to data processing, and these settings will be discussed in relation to the processing engine that they impact.

Finally, this document will define best practices and logical data processing limits. This will include suggestions on how to create workflow structures and settings for specific data processing workloads. With respect to defining data processing limits, this document will distinguish practical / logical data processing limits from hard / physical data processing limits and will provide a detailed explanation of the suggested logical limits.
This is an important topic because in many situations the physical data processing limit will accept/tolerate the amount of data being processed, but the same data may be processed far more efficiently by adhering to logical limits and building the appropriate workflow structures to partition data. These concepts are particularly important because, when properly implemented, they enable efficient storage, potential parallel processing, and high-performance reporting/consumption.

Conclusion

Large Data Units can create problems for loading, calculating, consolidating, and reporting data. This really is a limitation of what the hardware and networks can support, and your design needs to take it into account. This paper provides some options to relieve some of the pressure points that could appear. NOTE: some tables mentioned in the paper have changed in version 9+. See this note for further details.

The Magic and Math of C#Top
I hear that you’ve bought the OneStream Administrator Handbook – well, well, congratulations on taking the steps that will (hopefully) make your life as an Administrator easier! The book is brimming with great examples and use cases on topics that an Administrator will likely encounter, but here’s something a little extra – a more detailed breakdown of how C#Top works from the base entities up to their parents. This comes in handy when an end user new to OneStream comes to you and says something along the lines of, “Hey, the sum of the base entities doesn’t equal the parent entity, what gives?”

Historical Restatement, No Code Required!
In today’s modern business world, a company’s organization structure can change more frequently than anticipated through events like acquisitions, divestitures, and mergers. The organization’s structure in OneStream is often represented in the Entity dimension. This blog discusses a scenario where an entity changed owners and how its historical data was restated.

How's your Week?
When you are designing an application, do you wonder whether you should include weeks in your time dimension? This blog looks at the options available when you have a requirement for storing and reporting weekly data, and answers some of the questions you should ask yourself when deciding how to meet that requirement.

Unlocking the Power of Attributes in OneStream: Balancing Potential with Performance
OneStream offers users the ability to activate Attributes for key dimensions such as Entities, Scenarios, Accounts, and User-Defined dimensions. The process to enable Attribute Members is relatively straightforward, found within the Settings -> Attribute Member section of the User-Defined dimensions. While Attributes hold the promise of expanding the Financial Model capabilities of OneStream, they also come with a caveat: the potential to impact system performance. In this blog post, we embark on a journey through the realm of Attributes in OneStream. We will delve into the opportunities they present for enriching your financial model and into the challenges that may arise, particularly concerning performance. By the end of this exploration, you'll have a comprehensive understanding of when and how to implement Attributes effectively, ensuring that your OneStream application strikes the right balance between functionality and performance.

What is a Data Unit?
As you start to build and design an application, you may keep hearing about the concept of a data unit. It is a critical concept, fundamental to how OneStream works. The following is an excerpt from the book OneStream Foundation Handbook by The Architect Factory, which covers not only the data unit but also many other design aspects and fundamental concepts of OneStream.

Can a member have two or more different Account Types?
Have you run into the situation where a customer has a KPI or statistical account that they want OneStream to dynamically calculate for one Scenario Type and users to enter for another? If you’ve worked with OneStream’s dimension library, you know that varying Account Type by Scenario Type isn’t possible. There is, however, a workaround if the account needs to be reported for both Scenario Types on the same line.

To set up our dilemma, we create an account called DynamicActInputBud. We set the Account Type and Formula Type as Dynamic Calc and Allow Input as True. The formula on this account, Return api.Data.GetDataCell("A#15000:O#Top/A#22000:O#Top"), is entered only on the Actual Scenario Type, and the Budget Scenario Type formula is left blank. In the cube view, the formula result comes through for Actual, but input for Budget is not permitted. This is because a Dynamic Calc Account Type doesn’t store data to the database. So we go back, change the Account Type to Non Financial (leaving the Formula Type as Dynamic Calc), and run the cube view again. This time Budget allows input, but the formula result doesn’t come through for Actual.

If the customer is adamant about having a single line that functions both as a dynamic calc and as an input, you’re out of luck (as illustrated above). On the other hand, if it’s simply a reporting line and the dynamic calculation and input are completed at different points in the process, then there is a simple workaround. Basically, the solution involves setting up independent accounts for each use (Actual as Dynamic Calc and Budget as Non Financial) as well as a “pointer” account that retrieves the value sitting in each of the respective accounts. The pointer account is set up as a Dynamic Calc with Allow Input set to False. The dynamic account (for Actual) is set up as a Dynamic Calc with Allow Input set to False.
The input account (for Budget) is set up as a stored Account Type with Allow Input set to True. You then reference the pointer account on the cube view, and it returns the respective value of each independent account based on the scenario.

1. Set up the dynamic account (for Actual): DynamicAct. Enter the Formula you want calculated on this account.
2. Set up the input account (for Budget): InputBud. Select whatever stored account type is required for your application.
3. Set up the pointer account: DynamicActInputBud.
   On the Actual Scenario Type: Return api.Data.GetDataCell("A#DynamicAct")
   On the Budget Scenario Type: Return api.Data.GetDataCell("A#InputBud")
   NOTE: Be sure to write the Formula for Calculation Drill Down as well!
4. On your cube view, call A#DynamicActInputBud for the account (screenshots of the account rows and columns are not reproduced here).
5. Run the cube view and review the results (results screenshot not reproduced here).
6. You can go one step further: to prevent entering data into the Actual Scenario on the InputBud account, add it to a Conditional NoInput rule (not shown in this blog post).

This is just one example of using some basic creativity to solve a customer’s reporting requirement.
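If you would rather maintain one formula on the pointer account instead of entering a separate one-liner on each Scenario Type tab, the same dispatch can be sketched as a single member formula. This is a sketch, not the blog's exact approach: it assumes the scenarios are literally named "Actual" and "Budget", and the use of api.Pov.Scenario.Name to detect the current scenario is an assumption to check against your OneStream version.

```vb
' Hypothetical single member formula on the pointer account (DynamicActInputBud).
' Assumes scenario names "Actual" and "Budget" -- adjust to your application.
If api.Pov.Scenario.Name.Equals("Actual", StringComparison.OrdinalIgnoreCase) Then
    ' Actuals: return the dynamically calculated value
    Return api.Data.GetDataCell("A#DynamicAct")
Else
    ' Budget (and other input scenarios): return the stored input value
    Return api.Data.GetDataCell("A#InputBud")
End If
```

Entering the formula separately per Scenario Type, as the steps above do, avoids the name check entirely and is the safer pattern when scenario names may change.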