The most common issue with predictive systems is the quality of the source data: without good input data, what can be expected of the outcome? That’s exactly where OneStream distinguishes itself. The platform has a sophisticated and extensive data quality setup. Guided workflows, full transparency, audit trails, and data controls ensure that the quality of the gathered data is high. Using this data as the source of a predictive model will deliver high-quality predictions. And the great news is that all of this is now available to every OneStream client.
So, where do you start with Predictive Analytics? We’ll first describe the setup of the solution and then its usage and outcomes.
Next to the configuration of database settings that are also required for many other Market Place solutions, Predictive Analytics requires the ‘installation’ of the OneStream Data Science Package. As you might know, data scientists often use Python as one of the instruments in their toolkit. OneStream offers a pre-configured folder that contains the Python functionality required by this Market Place solution. The only requirement is to copy this folder to a location (a server or shared drive without execution restrictions) that is accessible from your OneStream environment. The path where you paste this folder is used later during the configuration of Predictive Analytics. If you have a OneStream cloud environment, the support team will do this for you.
The next step is to import the solution zip via Application Tools > Load/Extract:
Now the dashboard is available in OnePlace > Dashboards with buttons to create the tables and launch the solution.
In the top right, you’ll find the Settings button, where you need to enter the path to the Python library on the Global Options page. After that, you’re set up and ready to start configuring models. Make sure the path ends with Python.exe, so it points to the executable and not just the folder!
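If you want to double-check that setting before saving it, a quick sketch of the check is below. The example path is hypothetical; use the location where you copied the Data Science Package in your own environment.

```python
# Sanity check for the Global Options setting: the path must end in
# Python.exe (the executable itself), not just the folder that contains it.
# The example path below is hypothetical.
import ntpath  # handles Windows-style backslash paths regardless of OS

configured_path = r"\\fileserver\OneStream\DataSciencePackage\Python.exe"

def points_to_executable(path):
    """True when the setting names Python.exe itself, not its folder."""
    return ntpath.basename(path).lower() == "python.exe"

print(points_to_executable(configured_path))                               # True
print(points_to_executable(r"\\fileserver\OneStream\DataSciencePackage"))  # False: folder only
```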
The Predictive Analytics homepage contains three main items.
The first step is to set up a template from the Setup screen by clicking the Add button:
The template can be modified via the Edit button, but once a model has been run, the elements of the Cube View and scenario sections can no longer be modified. However, you can easily copy a template if you want to change some parameters.
The template fields are described in detail in the Setup Instructions, so I won’t describe them all here.
Some interesting template fields are:
The data in a Cube View is the basis for your predictive analysis. The user executing the model must have proper access rights to that Cube View, meaning the user must be able to modify data and must be in the Access and Maintenance groups. Predictive Analytics doesn’t take the Cube View security into account, so this needs to be set in the Cube View itself to prevent any overrides.
Please note that in the current version it is not (yet) possible to use a Cube View that pulls data from multiple cubes as the source for Predictive Analytics.
Next to that, Bound parameters are required for the different dimensions in the Cube View:
Now that your model is configured, you’re good to go.
After saving your template, you can go directly to the Predict page via the buttons at the top. Here you need to select the template we’ve just created:
When you select a data cell in the Cube View that has a template available (for which you’ll need to be in the right Period, 2018M12 in my case), you can press the Play button to calculate the model:
You can also press Calculate All to run the available models for your entire Cube View, but that can take a while, especially if you haven’t selected specific models and the setting is on Auto (on my demo machine, the environment froze for a while). Calculating took 10 minutes in my situation:
When the model has been executed for the selected cell, the bottom panel of the dashboard will be populated with the original and predicted data:
After running it for one account, the forecasted data is automatically populated (on base level) and visible in the Cube View:
Not all historic periods you’d like to use have to be visible in the Cube View. For example, in my case, the Cube View starts at 2018M6 while I use 12 months of historic data.
After running your model, you can clear the results, view the model in detail and comment on the outcomes:
Here you see the best matching models and can select one of them to show the figures attached to it. However, only the best matching outcomes are stored in your Cube View.
There is also an Audit sheet where you can have an overview of which models have been run by whom on which accounts:
Here you can also review all comments.
For best results, a minimum of 18 months of data is required. For seasonality, OneStream advises 24 months (at least two full yearly cycles), and for ARIMA (Autoregressive Integrated Moving Average) even 48 months.
The current version supports the most common and frequently used forecasting methods, such as:
If you’re not really into statistics, the details of these methods are added as an appendix to the Predictive Analytics 123 Setup Instructions for your reference.
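To give a feel for what happens under the hood, here is a deliberately tiny sketch of the general idea of fitting several forecasting methods and keeping the best one. The two methods and the error measure below are generic textbook techniques chosen by me for illustration; they are not OneStream’s actual algorithms, which are more sophisticated.

```python
# Toy illustration of "try several forecasting methods, keep the best one".
# The method names and the selection rule are my own simplification,
# not OneStream internals.

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    return sum(history[-window:]) / window

def exp_smoothing_forecast(history, alpha=0.5):
    """Simple exponential smoothing: a weighted average favoring recent data."""
    level = history[0]
    for value in history[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def best_model(history):
    """Hold out the last point and pick the method with the lowest error on it."""
    train, actual = history[:-1], history[-1]
    candidates = {
        "MovingAverage": moving_average_forecast(train),
        "ExpSmoothing": exp_smoothing_forecast(train),
    }
    name, forecast = min(candidates.items(), key=lambda kv: abs(kv[1] - actual))
    return name, forecast

# 12 months of fictional revenue data
monthly = [100, 102, 98, 105, 110, 108, 112, 115, 117, 120, 122, 125]
name, forecast = best_model(monthly)
print(name, round(forecast, 1))
```

Run per account, a selection like this is why the winning model can differ from one account to the next: each account’s history rewards a different method.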
The solution comes with a few default dashboards that can also be embedded in your application dashboards. To do this, the Server Task Argument needs to be set up properly, so make sure the right template is shown.
Predictive Analytics is a great addition to your OneStream platform, enabling users to create predictive models for more insight into your business.
Pros of OneStream Predictive Analytics:
+ Automatically selects the best predictive model for your data, which can even vary by account
+ Easy to configure and play with your models
+ No (technical) statistical knowledge required to start modeling your own data
Cons of OneStream Predictive Analytics:
– Running complex prediction models takes time. Depending on the size of your dataset, you may need some patience.
The setup is described in detail in the documentation and the configuration is easy. Therefore, I classify this solution as Admin Configurable. Please see my earlier post about classifications.
Thank you for sharing. Do you know if there is a simple way to see/export which model has been selected for each line of my Cube View data set?
Unfortunately, I don’t. If you select the row, you can see the model in the dashboard, so it must be stored somewhere. I assume you can find it in the custom data tables that are created for this solution, which you can find under System > Database.
So: create a model, execute it, check in the dashboard which model has been used, and find that value in the database. Via a data adapter you can then run queries against these tables to show the value in a report or dashboard.
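To illustrate that idea, here is a minimal sketch. The table and column names (XFW_PA_Results, SelectedModel, and so on) are invented for the example; check your own application database under System > Database for the real names. A dashboard data adapter would run a similar SQL query against the application database.

```python
# Hypothetical sketch: query the custom tables a Market Place solution
# creates, the way a data adapter would. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the application database
conn.execute("""
    CREATE TABLE XFW_PA_Results (
        Account TEXT, TimePeriod TEXT, SelectedModel TEXT, ForecastValue REAL
    )
""")
conn.executemany(
    "INSERT INTO XFW_PA_Results VALUES (?, ?, ?, ?)",
    [("Revenue", "2019M1", "ARIMA", 1250.0),
     ("Travel",  "2019M1", "ExpSmoothing", 84.5)],
)

# The kind of query a data adapter could expose in a report/dashboard:
rows = conn.execute(
    "SELECT Account, SelectedModel FROM XFW_PA_Results ORDER BY Account"
).fetchall()
for account, model in rows:
    print(account, model)
```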