Transformation Rules - Possible Bug
I have an issue: since we migrated to V9, OneStream is adding a space on its own after transforming a member, and this prevents us from loading the data to that member (it fails at the validation step). The data source has been checked, adjusted, and tested in many ways; the extra space doesn't come from there. The transformation rules have also been tested in many ways, and the space isn't coming from there either; we even tried one-to-one rules, masks, everything, and that is not the issue. The error only happens with two UD3 members, US4 and CA4, both ending in 4. I created a fake one for testing, "US41", and that one works fine. I have been checking with OneStream support, but they just want to repeat the same testing we already did with them on a call. After checking everything I could, this looks to me like a bug in the new V9.1, but if anyone has any ideas, please let me know.
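One quick check before calling it a bug: dump the character codes of the member names on both sides of the transformation, so a plain space (32) can be told apart from a non-breaking space (160) or some other invisible character. A minimal sketch in VB.NET, assuming you can run it from an Extender rule or any scratch .NET console; the member list is just the examples from this post:

```vb
Imports System
Imports System.Linq

Module WhitespaceCheck
    Sub Main()
        ' Replace with the transformed values you actually see in the stage.
        Dim candidates As String() = {"US4", "CA4", "US41"}
        For Each name As String In candidates
            ' Length plus the Unicode code point of every character makes
            ' any trailing whitespace visible.
            Dim codes As String = String.Join(",", name.Select(Function(c) AscW(c)))
            Console.WriteLine($"[{name}] Length={name.Length} Codes={codes}")
        Next
    End Sub
End Module
```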

Matrix data load with entities in columns
Is there a way to set up a single matrix data source for a file with accounts in rows and entities in columns, where the entities could change, as well as the number of entities? Can you set up the matrix for a maximum number of entities and read the entity names from a specific row?
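One workaround that sidesteps the matrix limits entirely is to pre-process the file: read the entity names from the header row and rewrite each data cell as one "Entity,Account,Amount" line, so a plain delimited data source can load it no matter how many entity columns arrive. A minimal sketch under that assumption (delimiter and column layout are illustrative):

```vb
Imports System
Imports System.IO
Imports System.Linq

Module MatrixPreProcess
    ' Rewrites "Account,Entity1,Entity2,..." into one "Entity,Account,Amount"
    ' line per populated data cell.
    Sub Normalize(sourcePath As String, targetPath As String)
        Dim lines As String() = File.ReadAllLines(sourcePath)
        ' Assumption: entity names start in the second column of the first row.
        Dim entities As String() = lines(0).Split(","c).Skip(1).ToArray()

        Using writer As New StreamWriter(targetPath)
            For Each line As String In lines.Skip(1)
                Dim cells As String() = line.Split(","c)
                For i As Integer = 0 To entities.Length - 1
                    If cells(i + 1).Trim().Length > 0 Then
                        writer.WriteLine($"{entities(i).Trim()},{cells(0).Trim()},{cells(i + 1).Trim()}")
                    End If
                Next
            Next
        End Using
    End Sub
End Module
```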

Identifying API Parameters for OneStream Audit Log Export via Postman
Hello all, I joined a company where they have implemented a OneStream solution. I have access to the platform and am still exploring how it was implemented. I am currently working on integrating OneStream with an external platform and need to export audit logs using the product's API. While setting up the request in Postman, I've encountered difficulty identifying the correct values for the following parameters: ApplicationName and AdapterName. Despite reviewing the available documentation, these parameters remain unclear. Has anyone successfully queried audit logs via the OneStream API and can share insights on where these parameters are defined or how to retrieve them? Any guidance or examples would be greatly appreciated. Thank you in advance!
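In my understanding (please verify against the REST API guide for your version), ApplicationName is the application you select at logon, and AdapterName is the name of a Dashboard Data Adapter saved inside that application whose query returns the rows you want. A heavily hedged VB.NET sketch of the call; the endpoint path, body shape, and names here are assumptions to check, and your version may require additional fields:

```vb
Imports System
Imports System.Net.Http
Imports System.Net.Http.Headers
Imports System.Text
Imports System.Threading.Tasks

Module AuditLogExport
    Async Function GetAuditLogsAsync() As Task(Of String)
        Using client As New HttpClient()
            client.BaseAddress = New Uri("https://<your-onestream-host>/OneStreamApi/")
            client.DefaultRequestHeaders.Authorization =
                New AuthenticationHeaderValue("Bearer", "<api-token>")

            ' ApplicationName = the application picked at logon;
            ' AdapterName     = a Dashboard Data Adapter in that application
            '                   whose query returns the audit rows.
            Dim body As String =
                "{""ApplicationName"":""<application-name>""," &
                """AdapterName"":""<adapter-name>""}"

            ' Endpoint path is an assumption; confirm it in your API documentation.
            Dim response = Await client.PostAsync(
                "api/DataProvider/GetAdoDataSetForAdapter",
                New StringContent(body, Encoding.UTF8, "application/json"))
            response.EnsureSuccessStatusCode()
            Return Await response.Content.ReadAsStringAsync()
        End Using
    End Function
End Module
```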

Connector rule - Drill back on dimension using a business rule logical operator
Hi, in a connector business rule, when a dimension has a business rule logical operator, is there a better way to build the SQL drill-back query than reverse-engineering what the business rule is doing? In the example below, the business rule brings "Zero" to the stage when the source UD1 is null. The code in the drill back reverses that to get the right source data:

```vb
'UD1
If sourceValues.Item(StageTableFields.StageSourceData.DimUD1).ToString.XFEqualsIgnoreCase("Zero") Then
    whereClause.Append("And (Department IS NULL Or Department = '') ")
Else
    whereClause.Append("And (Department = '" & SqlStringHelper.EscapeSqlString(sourceValues.Item(StageTableFields.StageSourceData.DimUD1).ToString) & "') ")
End If
```

However, this is a simple business rule. I am wondering if there is a way of getting the source data before it is passed through the logical operator business rule, to reduce code complexity in the drill back. Thank you
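One way to keep the complexity down, if not eliminate it, is to put the mapping in a single shared helper that both the connector and the drill-back call, so the inverse logic lives next to the forward logic instead of being reverse-engineered from it. A minimal sketch, assuming a helper class you create yourself (the names here are hypothetical; SqlStringHelper is from the post and resolves inside a business rule):

```vb
Imports System
Imports System.Text

' Hypothetical shared helper: the one place that knows how the source
' Department column maps to stage UD1.
Public Class DepartmentMapping
    ' Forward direction, called from the connector.
    Public Shared Function ToStageUd1(sourceDepartment As String) As String
        If String.IsNullOrEmpty(sourceDepartment) Then Return "Zero"
        Return sourceDepartment
    End Function

    ' Inverse direction, called from the drill-back to build the WHERE clause.
    Public Shared Sub AppendWhereClause(whereClause As StringBuilder, stageUd1 As String)
        If String.Equals(stageUd1, "Zero", StringComparison.OrdinalIgnoreCase) Then
            whereClause.Append("And (Department IS NULL Or Department = '') ")
        Else
            whereClause.Append("And (Department = '" &
                SqlStringHelper.EscapeSqlString(stageUd1) & "') ")
        End If
    End Sub
End Class
```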

FTP File Load Best Practice
When I pull a flat file from an SFTP server, I have been parsing that file in VB.NET into a DataTable and loading it that way. Is this the best way to do this, or is there a way to make the import read the file from the file system once it has been downloaded? I feel like parsing a CSV file using Split() is risky. Thanks, Scott
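On the Split() concern specifically: the framework's Microsoft.VisualBasic.FileIO.TextFieldParser handles quoted fields and embedded delimiters that a naive Split(","c) gets wrong, so it is a safer drop-in for the DataTable approach. A minimal sketch (the header handling is an assumption about the file):

```vb
Imports System.Data
Imports Microsoft.VisualBasic.FileIO

Module CsvLoad
    Function LoadCsv(filePath As String) As DataTable
        Dim table As New DataTable()
        Using parser As New TextFieldParser(filePath)
            parser.TextFieldType = FieldType.Delimited
            parser.SetDelimiters(",")
            parser.HasFieldsEnclosedInQuotes = True ' copes with "Smith, John" style fields

            ' Assumption: the first row is a header.
            For Each header As String In parser.ReadFields()
                table.Columns.Add(header.Trim())
            Next

            While Not parser.EndOfData
                table.Rows.Add(parser.ReadFields())
            End While
        End Using
        Return table
    End Function
End Module
```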

Loading Amount and Annotations together
Hi, not sure if anyone has come across this one, but I thought it was interesting to share. If you have a file with the amount and the annotation on the same line, you can set View to YTD (or Periodic) and then use this API property in the text value complex expression: api.Parser.TextValueRowViewMember = "Annotation". That makes the parser automatically generate the line(s) with the annotation whenever the text value has a value. HTH
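For context, a sketch of what that text value complex expression might look like end to end. The TextValueRowViewMember property is from the post; the column index and the use of api.Parser.DelimitedParsedValues to read the raw column are assumptions:

```vb
' Text value complex expression (sketch; runs per source line during parsing).
' Tell the parser which View member the generated annotation rows should use.
api.Parser.TextValueRowViewMember = "Annotation"

' Assumption: the annotation text sits in a known column of the delimited line.
Dim annotation As String = api.Parser.DelimitedParsedValues(6)

' Returning a non-empty string makes the parser emit the extra annotation row.
Return annotation
```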

importing a file with variable columns
We have been tasked with importing a file that has a large number of variable columns. For the sake of easy explanation, let's say the first five columns are standard (time, entity, ud1, ud2, ud3), but the file could have from 50 to 150 additional columns, one for each account. If there is no data for an account, there is no column for it. New accounts could appear in the future without warning. No, we don't have the ability to change the format of the report. (Oh, how I wish.) I have thought up several ways of making this work, but each is fraught with its own type of peril:
1. Create a new custom table dynamically to stage the data. Parse the column names from the file. Use the column name list to run a new SQL query to unpivot.
2. Parse the file in-memory to manually unpivot: parse each data column, add rows to a DataTable, then return the full DataTable (see the sketch after this list).
3. Maintain a list of the columns we care about the most, parse the file in advance, and save the column name/position maps to parameters/a lookup table.
4. Use up every possible attribute/value field in a data source to stage to BI Blend and try to unpivot from there. Hope they never need more "important" columns than OneStream can handle. (This is similar to option 1, but we're not stuck dropping/creating a custom table ourselves and we have more consistent column names.)
5. Write a manual file parser that creates a new, sane text file and then imports that instead. (Seems wasteful. If I can get it this far, I can probably just do it in-memory, i.e., option 2.)
6. Some other, better idea that I haven't thought of yet.
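A minimal sketch of option 2, the in-memory unpivot, assuming a comma-delimited file with the five fixed columns first and one column per account after them (names and indices are illustrative, and a real file would deserve TextFieldParser rather than Split, per the FTP thread above):

```vb
Imports System
Imports System.Data
Imports System.IO
Imports System.Linq

Module VariableColumnUnpivot
    Function Unpivot(filePath As String) As DataTable
        Const FixedColumns As Integer = 5

        Dim result As New DataTable()
        For Each name As String In {"Time", "Entity", "UD1", "UD2", "UD3", "Account", "Amount"}
            result.Columns.Add(name, GetType(String))
        Next

        Dim lines As String() = File.ReadAllLines(filePath)
        ' Whatever account columns exist today are read from the header row,
        ' so new accounts appearing later need no code change.
        Dim accounts As String() = lines(0).Split(","c).Skip(FixedColumns).ToArray()

        For Each line As String In lines.Skip(1)
            Dim cells As String() = line.Split(","c)
            For i As Integer = 0 To accounts.Length - 1
                Dim amount As String = cells(FixedColumns + i).Trim()
                If amount.Length > 0 Then
                    result.Rows.Add(cells(0), cells(1), cells(2), cells(3), cells(4),
                                    accounts(i).Trim(), amount)
                End If
            Next
        Next
        Return result
    End Function
End Module
```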