Recent Discussions
Loading Amount and Annotations together
Hi, not sure if anyone has come across this one, but I thought it was interesting to share. If you have a file that carries the amount and the annotation on the same line, you can set the View to YTD (or Periodic), then in the Text Value complex expression use this API property: api.Parser.TextValueRowViewMember = "Annotation". That will make the parser automatically generate the annotation line(s) whenever the text value has a value. HTH
franciscoamores · 5 hours ago · Solved
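For anyone wiring this up, a minimal sketch of how that property might sit inside the data source's Text Value complex expression — the zero-based column index is an assumption, not part of the original tip:

    ' A minimal sketch, assuming a delimited file where the annotation text
    ' sits in column index 6 (zero-based) - the index is an assumption.
    Dim annotation As String = api.Parser.DelimitedParsedValues(6)
    If Not String.IsNullOrWhiteSpace(annotation) Then
        ' Route this row's text value to the Annotation view member so the
        ' parser generates the annotation row alongside the amount row.
        api.Parser.TextValueRowViewMember = "Annotation"
    End If
    Return annotation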
Error when accessing the Workflow Import step
Hi everyone, I was recently working in a demo environment where I suddenly started getting an error message when trying to access the Load and Transform (Import) step of the Import workflow. The error (shown in the attached screenshot) pops up every time anyone tries to access the Load and Transform (Import) step. Has anyone seen this issue before and knows how to fix it? NOTE: the issue is the same across all applications in this environment. Thanks, Hamza
Hamza · 2 days ago · Solved
Help with moving/copying multiple csv files from file share to external shared Azure storage.
Hello Community - I'm looking for an extender BR to automate copying a batch of CSV files from a file share folder to external Azure storage. Any help is appreciated. Thanks.
fellow_Ones · 3 days ago
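Until someone shares a production-tested rule, here is a hedged extender sketch of one way this could look. It assumes the Azure.Storage.Blobs package is referenced by the business rule, and the folder path, connection string, and container name are all placeholders:

    ' A hedged sketch, not a production rule. Standard OneStream BR Imports
    ' (OneStream.Shared.Common etc.) are omitted for brevity.
    Imports System.IO
    Imports Azure.Storage.Blobs

    Namespace OneStream.BusinessRule.Extender.CopyCsvToAzure
        Public Class MainClass
            Public Function Main(ByVal si As SessionInfo, ByVal globals As BRGlobals,
                                 ByVal api As Object, ByVal args As ExtenderArgs) As Object
                Dim sourceFolder As String = "<file share path>\Exports"   ' assumption
                Dim connString As String = "<azure storage connection string>"
                Dim container As New BlobContainerClient(connString, "target-container")

                ' Upload every CSV in the folder, overwriting existing blobs.
                For Each filePath As String In Directory.GetFiles(sourceFolder, "*.csv")
                    Dim blob As BlobClient = container.GetBlobClient(Path.GetFileName(filePath))
                    blob.Upload(filePath, overwrite:=True)
                Next
                Return Nothing
            End Function
        End Class
    End Namespace

To move (rather than copy) the files, you could delete each source file after a successful upload.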
Identifying API Parameters for OneStream Audit Log Export via Postman
Hello all, I joined a company that has implemented OneStream. I have access to the platform and am still exploring how it was implemented. I am currently working on integrating OneStream with an external platform and need to export audit logs using the product's API. While setting up the request in Postman, I've encountered difficulty identifying the correct values for the following parameters: ApplicationName and AdapterName. Despite reviewing the available documentation, these parameters remain unclear. Has anyone successfully queried audit logs via the OneStream API and can share insights on where these parameters are defined or how to retrieve them? Any guidance or examples would be greatly appreciated. Thank you in advance!
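One hedged pointer while this waits for an answer: in the OneStream REST API's Data Provider endpoint, ApplicationName typically means the application you log into, and AdapterName typically refers to a Dashboard Data Adapter you create inside that application (e.g., one whose query reads the audit tables). A sketch of the request shape — the endpoint path, JSON property names, and all server/application/adapter names here are assumptions to verify against your REST API documentation:

    ' A hedged sketch of calling the Data Provider endpoint with a bearer token.
    Imports System.Net.Http
    Imports System.Text

    Module AuditLogExportSketch
        Sub Main()
            Dim client As New HttpClient()
            client.DefaultRequestHeaders.Add("Authorization", "Bearer <token>")

            ' ApplicationName = the OneStream application; AdapterName = a
            ' Dashboard Data Adapter defined in it (names are placeholders).
            Dim body As String =
                "{""BaseWebServerUrl"":""https://<server>/OneStreamWeb""," &
                """ApplicationName"":""<YourApplication>""," &
                """WorkspaceName"":""Default""," &
                """AdapterName"":""<YourAuditLogAdapter>""," &
                """ResultDataTableName"":""AuditLogs""}"

            Dim response = client.PostAsync(
                "https://<server>/OneStreamApi/api/DataProvider/GetAdoDataSetForAdapter",
                New StringContent(body, Encoding.UTF8, "application/json")).Result

            Console.WriteLine(response.Content.ReadAsStringAsync().Result)
        End Sub
    End Module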
Composite rule for filled value only
Hi, is there a way to apply a composite rule to filled values only? If I use U8#*, it picks up every record, filled or not. My intention is to use another rule behind this one to fill in a default value when the source is blank.
Pawel · 7 days ago
Business Rule | One-to-One Transformation - E# Parent to its first E# Base
We need your help to import flat files:
1. Number of rows: > 100k
2. Organized under E# parent members: a few thousand
3. E# is extended
We've been advised to build a BR that generates a one-to-one transformation rule file mapping each E# parent to its first E# base. Would any SMEs be generous enough to share with us/the community:
- a sample BR to look up the first E# base member of an E# parent member, under an extended E# (a hedged sketch follows below)
- how to call that BR in an Import workflow
Thank you, OS SMEs.
KH1 · 7 days ago
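Not a production answer, but a hedged sketch of the lookup half, assuming the standard BRApi member functions behave as documented. An extender could loop over the parents, call this, and write out the one-to-one rule file; the same function could also be called from a complex expression during import. The dimension name is a parameter you'd pass in per application:

    ' A hedged sketch: return the name of the first base member under a parent.
    ' dimName should be the extended Entity dimension assigned to your cube
    ' (an assumption to adjust per application).
    Public Function GetFirstBaseMember(ByVal si As SessionInfo,
                                       ByVal dimName As String,
                                       ByVal parentName As String) As String
        Dim dimPk As DimPk = BRApi.Finance.Dim.GetDimPk(si, dimName)
        Dim parentId As Integer = BRApi.Finance.Members.GetMemberId(si, dimPk.DimTypeId, parentName)

        ' GetBaseMembers returns base descendants in hierarchy order, so the
        ' first entry is the first E# base under the parent.
        Dim baseMembers As List(Of Member) = BRApi.Finance.Members.GetBaseMembers(si, dimPk, parentId)
        If baseMembers IsNot Nothing AndAlso baseMembers.Count > 0 Then
            Return baseMembers(0).Name
        End If
        Return String.Empty
    End Function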
importing a file with variable columns
We have been tasked with importing a file that has a large number of variable columns. For the sake of easy explanation, let's say the first five columns are standard (time, entity, ud1, ud2, ud3), but the file could have 50 to 150 additional columns, one for each account. If there is no data for an account, there is no column for it. New accounts could appear in the future without warning. No, we don't have the ability to change the format of the report. (Oh, how I wish.) I have thought up several ways of making this work, but each is fraught with its own type of peril:
1. Create a new custom table dynamically to stage the data. Parse the column names from the file. Use the column-name list to run a new SQL query to unpivot.
2. Parse the file in-memory to manually unpivot, parsing each data column and adding rows to a DataTable, then returning the full DataTable (sketched below).
3. Maintain a list of the columns we care about the most, parse the file in advance, and save the column name/position maps to parameters/a lookup table.
4. Use up every possible attribute/value field in a data source to stage to BI Blend and try to unpivot from there. Hope they never need more "important" columns than OS can handle. (This is similar to option 1, but we're not stuck dropping/creating a custom table ourselves and we have more consistent column names.)
5. Write a manual file parser that creates a new, sane text file and then imports that instead. (Seems wasteful. If I can get it this far, I can probably just do it in-memory, i.e., option 2.)
6. Some other, better idea that I haven't thought of yet.
Solved
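For what it's worth, the core of option 2 is small enough to sketch. A hedged example, assuming a comma-delimited file with no quoted or embedded commas (a real parser should handle quoting); the file path and output column names are placeholders:

    ' A hedged sketch of option 2's core: treat the first five columns as fixed
    ' keys and unpivot every remaining column into one (Account, Amount) row.
    Imports System.Data
    Imports System.IO

    Module VariableColumnUnpivot
        Public Function UnpivotCsv(ByVal filePath As String) As DataTable
            Dim result As New DataTable()
            For Each colName As String In {"Time", "Entity", "UD1", "UD2", "UD3", "Account", "Amount"}
                result.Columns.Add(colName)
            Next

            Dim lines As String() = File.ReadAllLines(filePath)
            Dim header As String() = lines(0).Split(","c)

            For i As Integer = 1 To lines.Length - 1
                Dim fields As String() = lines(i).Split(","c)
                ' Columns 0-4 are the fixed keys; 5 and up are one column per account.
                For c As Integer = 5 To fields.Length - 1
                    If fields(c).Trim().Length > 0 Then
                        Dim row As DataRow = result.NewRow()
                        For k As Integer = 0 To 4
                            row(k) = fields(k)
                        Next
                        row("Account") = header(c)   ' account name comes from the header
                        row("Amount") = fields(c)
                        result.Rows.Add(row)
                    End If
                Next
            Next
            Return result
        End Function
    End Module

New accounts appearing without warning is handled for free here, since the account list is read from the header on every import.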
Connect different OneStream SaaS deployments
Hi Folks, we're just starting a OneStream SaaS implementation to replace HFM. We're far from actively working on any data integrations yet, but I'm already starting to think about how things will hang together and how data will potentially flow in the future state. Two of the downstream feeds that we currently receive in our HFM system come from companies that are current users of OneStream SaaS. Thinking at a very high level for now: does anyone know of any way downstream systems could submit data to us without punching out to the internet, e.g. using the Azure Private Link service and staying within the Microsoft network (or some other networkey magic)? High-level pretty picture :) [diagram attached] Obviously we wouldn't be able to configure this ourselves as this is SaaS, but I'm interested to hear if anyone has experience exploring this or anything similar with OneStream SaaS, and whether anything like this is even possible in the SaaS world. Regards, Craig
chuggans · 14 days ago
How to convert a raw Excel file into multiple upload-ready OneStream journals. (3 quick steps)
Use case: You generated several journal lines in Excel, saved the file as a CSV, and the upload to OneStream failed. Here's how to clean it up and get it posted.
1) Paste your raw lines into Excel
Paste your raw journal lines into column A as shown above. Go to Data → Text to Columns → Delimited → Comma → Finish. You should now see the first row as H and the next two as D, D, with each field in its own column.
Clean the values (common blockers):
- ConsName must be a valid Consolidation member in your app (e.g., Entity or Local).
- Leave ParentName blank unless you're 100% sure of the direct parent.
- Don't type a literal None unless it's a real member; leave IC/UD fields blank, or use your member when one is required.
- Make sure the Scenario → Cube mapping matches your CubeName.
- If JournalBalanceType=Balanced, total Debits must equal total Credits per journal.
2) Save as CSV (UTF-8)
Keep only the upload rows on the active sheet (no notes above or beside them). File → Save As → CSV UTF-8 (Comma delimited) (.csv). Accept "only the active sheet will be saved."
3) Import & post in OneStream
OnePlace → Journals → Import (or your Workflow step). Choose File Type = CSV, select your file, and Import. Open the journal, Validate/Balance, then Post. Run a Consolidation and verify in a Cube View.
Hope this helps some people within the OneStream Universe 😊.
ChrisBriscoe · 16 days ago
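For orientation, the rough shape the sheet should have after step 1 — every field below is a placeholder, since the real header/detail column layout comes from your application's journal import template, not from this post:

    H,<journal name>,<CubeName>,<Scenario>,<Time>,<ConsName>,<JournalBalanceType>,...
    D,<entity>,<account>,<IC/UD fields...>,100.00,          <- debit line
    D,<entity>,<account>,<IC/UD fields...>,,100.00          <- credit line

With JournalBalanceType=Balanced, the two detail lines offset each other, which is the Debits = Credits check from the list above.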
How to copy a scenario from one application to another?
Hello, I recently recovered a scenario from a point-in-time restored application copy. I now would like to extract that scenario from the application copy and load it into our working application. Does anyone have an idea of how this could be done? Thank you, Jeremy Morgan
Solved