How to convert a raw Excel file into multiple upload-ready OneStream journals (3 quick steps)
Use case: you generated several journal lines in Excel, saved the file as a CSV, and the upload to OneStream failed. Here's how to clean it up and get it posted.

1) Paste your raw lines into Excel
Paste your raw journal lines into column A, then go to Data → Text to Columns → Delimited → Comma → Finish. You should now see the journal header row flagged H in the first column and the detail rows flagged D, with each field in its own column (see the sample layout after these steps).

2) Clean the values and save as CSV (UTF-8)
Common blockers:
- ConsName must be a valid Consolidation member in your app (e.g., Entity or Local).
- Leave ParentName blank unless you're 100% sure of the direct parent.
- Don't type a literal None unless it's a real member; leave IC/UD fields blank, or use your member where one is required.
- Make sure the Scenario → Cube mapping matches your CubeName.
- If JournalBalanceType = Balanced, total Debits must equal total Credits per journal.

Then keep only the upload rows on the active sheet (no notes above or beside them) and go to File → Save As → CSV UTF-8 (Comma delimited) (.csv). Accept the "only the active sheet will be saved" prompt.

3) Import & post in OneStream
OnePlace → Journals → Import (or your Workflow step). Choose File Type = CSV, select your file, and Import. Open the journal, Validate/Balance, then Post. Run Consolidation and verify in a Cube View.

Hope this helps some people within the OneStream Universe 😊
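To make the target format concrete, here is a tiny illustrative journal file. The H row carries journal-level fields (name, cube, scenario, time, balance type) and each D row carries one journal line (entity, account, UD1, IC, debit, credit). The column order shown is an assumption for illustration only; the authoritative layout is your application's journal import template.

```text
H,JNL_MarchAccruals,GolfStream,Actual,2024M3,Balanced
D,E100,60100,CC100,,1000.00,0.00
D,E100,20500,CC100,,0.00,1000.00
```

Note the two D rows: debits (1000.00) equal credits (1000.00), which is exactly what JournalBalanceType = Balanced checks per journal.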
How to copy a scenario from one application to another?

Hello, I recently recovered a scenario in an application copy restored from a point in time. I would now like to extract that scenario from the application copy and bring it into our working application. Does anyone have an idea of how this could be done? Thank you, Jeremy Morgan
Can ERP transactional data be loaded into OS?

Hi, my company's OneStream (OS) data integration with the ERP system is set up to load monthly TBs into OS, while the transactional tables sit outside of OS but can be queried and drilled back to from OS. The drill-back function is limited in that you first need to drill down to the detailed dimensional intersections at the lowest UD level before you can drill back to view transactions (at that precise lowest-level set of intersections). It is therefore easier to use a stand-alone BI viewer outside of OS to view transactions. I'm wondering: is it possible to load the ERP transactional tables into OS (potentially millions of rows spanning multiple years) and have OS display transaction-level detail across higher-level roll-ups (e.g., viewing transactions across a cost-centre parent rather than only at an individual cost centre), or even display transactional detail in quick views, essentially giving OS functionality similar to a BI viewer?
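One direction short of loading the rows into the cube is to query the external tables on demand for every base member under the chosen parent. Below is a minimal, hypothetical sketch of that idea for a business rule: the connection name ("ErpDb"), the table, and the column names are all placeholders, and in practice you would expand the parent via the member API rather than hard-coding the list.

```vb
Imports System.Data
Imports System.Linq

' Hypothetical: fetch ERP transactions for the base cost centres under a parent.
Public Function GetTransactionsForParent(ByVal si As SessionInfo) As DataTable
    ' Hard-coded for the sketch; in real code, expand the parent to base members.
    Dim baseCostCentres As String() = {"CC100", "CC110", "CC120"}
    Dim inList As String = String.Join(",", baseCostCentres.Select(Function(cc) "'" & cc & "'"))

    ' String-built SQL is for illustration only; parameterise in real code.
    Dim sql As String = "SELECT TransDate, CostCentre, Account, Amount, Description " &
                        "FROM ErpTransactions WHERE CostCentre IN (" & inList & ")"

    ' "ErpDb" must be defined as an external database connection in the system config.
    Using dbConn As DbConnInfo = BRApi.Database.CreateExternalDbConnInfo(si, "ErpDb")
        Return BRApi.Database.ExecuteSql(dbConn, sql, False)
    End Using
End Function
```

Bound to a dashboard grid, a query like this can give BI-style transaction viewing at any roll-up level without the cube having to carry millions of transactional rows.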
Running Validate and Load Steps for Multiple Years

Hi everyone, we are uploading portions of our budgets in Excel templates across multiple years. The data is imported across all the years in one shot, but the validate and load steps have to be run year by year when they go through the workflows we have set up. Does anyone know of a way to execute the validate and load steps across multiple years through a rule? I know we could use the batch harvest folders, but we're hoping to keep the workflow steps as close as possible to those used in our other processes. Thanks, James
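One possible shape for such a rule is an Extender that resolves a workflow unit per year and runs validate/load against each. A minimal sketch with placeholder profile, scenario, and time names; the workflow helper and BRApi.Import.Data calls shown here should be verified against the API reference for your version before use.

```vb
Public Sub ValidateAndLoadYears(ByVal si As SessionInfo)
    ' Placeholder times: one workflow unit per year.
    Dim times As String() = {"2024M12", "2025M12", "2026M12"}

    For Each timeName As String In times
        ' Resolve the workflow unit (profile + scenario + time) for this year.
        ' "Houston.Import" and "BudgetV1" are placeholder names.
        Dim wfClusterPk As WorkflowUnitClusterPk =
            BRApi.Workflow.General.GetWorkflowUnitClusterPkUsingNames(
                si, "Houston.Import", "BudgetV1", timeName)

        ' Validate the transformed data, then load it to the cube.
        BRApi.Import.Data.ValidateTransformation(si, wfClusterPk)
        BRApi.Import.Data.LoadCube(si, wfClusterPk, True) ' True: assumed process flag

        BRApi.ErrorLog.LogMessage(si, "Validated and loaded " & timeName)
    Next
End Sub
```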
Import data using multiple workflow profiles

Hi everyone, I'm having trouble understanding how to implement multiple import steps within a workflow. My client will load data through a classic import step throughout the months of the year. In addition, they have asked for 3 extra import steps (P13, P14, P15) to be used in December as "adjustments" to the data previously loaded to the cube. At each import step, however, the data is loaded in full (including any adjustments) and should overwrite the data loaded through the previous import steps (which is why we use multiple import steps rather than forms). For example, when P13 is loaded it should overwrite the data loaded through the regular import step; when P14 is loaded it should overwrite the data loaded through P13, and so on. The problem is that because each step uses a different workflow profile, the system does not automatically overwrite the data loaded through the previous workflow profile, so the data is simply added on top of what was loaded in earlier steps. Does anyone know whether there is a way to overwrite data from previous loads when a new import step is used? One idea I had was to delete the data on the cube through SQL while executing the BR connector rule, but I don't know which tables in the database hold the data I want to overwrite. Thanks in advance for any help; if something is not clear, please ask and I'll be happy to clarify!
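Rather than deleting rows from the cube tables with raw SQL (risky and unsupported), a common pattern is a Data Management sequence containing a Clear Data step scoped to the scenario, time, and entities being reloaded, triggered before the new import runs. A sketch, assuming a sequence named "Clear_Prior_Import" already exists in the application:

```vb
Public Sub ClearPriorImport(ByVal si As SessionInfo)
    ' Kick off the pre-built DM sequence; its Clear Data step defines the region
    ' to clear. Nothing = no custom substitution variables; pass name/value
    ' pairs if the sequence is parameterised (verify the overload in the API docs).
    BRApi.Utilities.StartDataMgmtSequence(si, "Clear_Prior_Import", Nothing)
End Sub
```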
Orphan members - validation error required

Hello dear community members. Background: we have had month-end workflow load validation errors that were fixed with a UD1 mass upload. A few UD1 members ended up in Orphans because their parent did not yet exist in OneStream at the time of the mass upload. The validation error disappeared and the workflow completed, but this created an out-of-balance that was difficult to trace back to the orphans. Questions / potential solutions I'd appreciate help with:

1. Is it possible for the month-end workflow load validation to treat Orphan members as invalid, so that we still get a validation error when members end up in Orphans by mistake?
2. Is there a way to delete Orphan members with a BR?
3. Is there a way to stop the Metadata Builder XML file from creating orphans when a parent is missing from OneStream?

Thank you in advance for any idea/suggestion/tip/hint.
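On question 3, the usual guard is to define (or pre-create) the parent in the same metadata file so the relationship can resolve at load time. An illustrative fragment only: the element and attribute names below are assumptions from memory of exported metadata files, so compare against an actual export from your application before relying on them.

```xml
<!-- Illustrative only: verify element/attribute names against a real export.
     Define the parent in the same load (or confirm it already exists) so the
     relationship resolves; otherwise the child lands in Orphans. -->
<members>
  <member name="UD1_Parent" description="Parent included in the same load" />
  <member name="UD1_New" description="New member from the mass upload" />
</members>
<relationships>
  <relationship parent="UD1_Parent" child="UD1_New" />
</relationships>
```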
Is there guidance on the setting for the number of parallel executions for Harvest Batch Loads?

When using the BRApi.Utilities.ExecuteFileHarvestBatchParallel function in an Extender BR, does anyone have guidance and/or opinions on the proper setting of the parallelBatchCount parameter? My understanding has always been to set it to at most the number of CPUs available to the server performing data management or batch load processing, minus one.
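That rule of thumb is easy to express in the Extender itself. A small sketch; the full argument list of ExecuteFileHarvestBatchParallel is deliberately left as a comment because the exact signature should be confirmed in the API reference for your version.

```vb
' Cap parallelism at (CPU cores - 1), with a floor of 1.
Dim parallelBatchCount As Integer = Math.Max(1, Environment.ProcessorCount - 1)
BRApi.ErrorLog.LogMessage(si, "Harvest parallelBatchCount: " & parallelBatchCount.ToString)

' Pass it as the parallelBatchCount argument; other arguments omitted here:
' BRApi.Utilities.ExecuteFileHarvestBatchParallel(si, ..., parallelBatchCount)
```

Note that Environment.ProcessorCount reports the cores visible to the app server process, which on virtualised or containerised servers may differ from the physical count.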
Form Update Via Excel is Appending, not Replacing

We have a form in our workflow for budgeting. Users have the option of interacting directly with the form or using a spreadsheet template we provide to update their values. Values are pre-seeded in the cube, and users can then adjust the amounts. When users update via the form, values are replaced as we would want. However, when users update via our provided Excel file, the amounts are appended to the original values. We can't locate where this behavior is triggered on upload to the form via Excel. Is there something in the VB we need to adjust, or is this 'standard' behavior? We'd like to let our users simply replace the values. Let me know if you need any additional clarification. Regards, DK
Matrix Data Load

We are looking to load data from a year-to-date (YTD) consolidated statement of subsidiaries. The statement is in matrix form, with accounts (both balance sheet and profit & loss) in rows and companies in columns. Each company has two sets of columns: one for financial results and one for eliminations. How can we set up the data source to read multiple companies' statements from one file? We would also like to load the eliminations to a separate origin (Elimination); how can we change the destination for the elimination columns? Finally, how do we set up a bypass row so the load ignores header and total rows? I noticed those rows can be identified by the Account column: if the account is blank, bypass the row.
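For anyone picturing the file, here is a simplified, hypothetical layout of the kind described: each company contributes a Results column and an Eliminations column, and the rows with a blank Account (the header and total rows) are the ones to bypass.

```text
Account , CompA-Results , CompA-Elims , CompB-Results , CompB-Elims
        , YTD March     , YTD March   , YTD March     , YTD March     <- blank Account: bypass
1000    , 125000        , -5000       , 80000         , 0
4000    , -98000        , 5000        , -61000        , 0
        , 27000         , 0           , 19000         , 0             <- total row: bypass
```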
Validation Error: No security access to data cell being cleared

Hi Team, we have a client that uploads Excel templates into OneStream using XFDRange, with a different file per Business Unit (UD2). Two of the files run into the intersection error in the title ("No security access to data cell being cleared") during the validation step.

All the other files are correct; however, when these two files are uploaded, all data intersections (from all the other files) suddenly also become invalid intersections. For example, one line listed in the error is not from either of the two files but from a separate file that has no errors when uploaded individually. When you navigate to "Intersection" instead of "Invalid Intersection (1)", all data intersections from all files are there. Only the local users get this validation error; when I (an administrator) run "Validate" from the same screen, the error disappears.

The strange thing is: they have uploaded data on this intersection before, in previous Forecast scenarios, without errors, and security has not changed for the application, user, workflow profile, entity, etc. These two files both contain an account that is only used there, but it is set up the same way as all the other accounts (and could previously be loaded in other scenarios), so that could just be a coincidence. Application security: ModifyData is set to Everyone. One thing to note is that this is a new forecast scenario we recently created for them; however, it is set up the same way as the other forecast scenarios in terms of security.

I am also having a hard time understanding the error and why it refers to "clearing data" in the validate step. Can anyone help me understand the error better or suggest what could cause it? Thanks in advance!