Validation on Data Source: Restrict Entities to _PX Suffix and Metadata Text Contains PSC^
Hi Community, I'm working on a data load in OneStream and need to enforce two validations on the Entity dimension during the upload process: the Entity name must end with _PX, and the Entity's metadata Text field (e.g., Text1/Text2) must contain PSC^ (basically, prevent data from being loaded to anything but a PSC entity). Currently I see the Logical Expression and Override settings on the Data Source column, but they seem limited to simple operators like Ends With or Like. My questions: Can this be achieved using a complex expression directly in the Data Source column settings, or do I need to implement it as a Transformation Validate rule or a Data Source business rule? If a business rule is the right approach, could someone share a best-practice snippet for checking both conditions (including the metadata lookup)? Should I create a conditional rule and add it to the logical expression above?
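
For illustration, a rough sketch of how both checks might sit in a complex expression or parser-type rule on the entity field. The args.Value hook, the BRApi.Finance.Members.GetMemberId / BRApi.Finance.Entity.Text helpers, and the text field number are assumptions to verify against your platform version (Entity text can vary by scenario type and time, so the real signature may take extra arguments):

```vb
' Hedged sketch only: args.Value and the BRApi helpers below are assumptions - verify them in your release.
Dim entityName As String = args.Value.Trim()

' Check 1: the source entity name must end with _PX.
If Not entityName.EndsWith("_PX", StringComparison.OrdinalIgnoreCase) Then
    Throw New Exception(String.Format("Entity '{0}' does not end with _PX.", entityName))
End If

' Check 2: the entity's Text1 property must contain PSC^ (assumed lookup; the real
' BRApi.Finance.Entity.Text signature may also need scenario type / time arguments).
Dim entityId As Integer = BRApi.Finance.Members.GetMemberId(si, DimType.Entity.Id, entityName)
Dim text1 As String = BRApi.Finance.Entity.Text(si, entityId, 1)
If String.IsNullOrEmpty(text1) OrElse Not text1.Contains("PSC^") Then
    Throw New Exception(String.Format("Entity '{0}' is not a PSC entity (Text1 must contain PSC^).", entityName))
End If

Return entityName
```

Throwing here surfaces the row as an import error; if you would rather see the failure in the Validate step, the same two checks could move into whichever Transformation/Validate rule type you end up using.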

Cubeview Drill Down issue: Load Results for Imported Cells is empty
Hello there, we are having an issue when using the Drill Down option from a Cube View: after getting to a base data cell within the Drill Down window and then trying to navigate to stage using "Load Results for Imported Cell", the results window is displayed empty. The workflow being used is a simple Import, Validate and Load that retrieves the imported data from a Connector, and there is no further data processing once it's in the Cube. From the Workflow page we can use Drill Down and Drill Back just fine. Any ideas why this is happening? Thank you. Oriol

Time/data setting in data source/Transformation rule for Multiperiod upload (delimited files)
Hi Everyone, I'm trying to build a data source and transformation rule for a multi-period manual data load through a single file. The import step didn't succeed. Can anyone suggest the time settings to use in the data source and transformation rule? I selected the data type "Stored Data Key Text" in the data source. My source file format: Col1 has GL accounts, Col2-13 have Jan to Dec data.
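
For illustration, a small sketch of the time handling a matrix-style data source typically needs here: each of the twelve value columns resolves to a time member name such as 2025M1..2025M12. How the year and column number reach the expression (hard-coded, workflow POV, or a header row) is an assumption, so the function below is only a shape to adapt:

```vb
' Hedged sketch - the caller that supplies the year and the file column number is an assumption.
' Maps file columns 2-13 (Jan-Dec) to OneStream-style time member names such as 2025M1..2025M12.
Private Function BuildTimeName(ByVal year As Integer, ByVal fileColumn As Integer) As String
    Dim monthNumber As Integer = fileColumn - 1          ' Col2 -> M1 (Jan) ... Col13 -> M12 (Dec)
    Return String.Format("{0}M{1}", year, monthNumber)
End Function
```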

Matrix Data Load Basics
I have a matrix data load question. I get the concept of how the matrix data load works, but I haven't done many of them, so I want to make sure I'm not missing the basics. The file I have has the year on one of the first lines, in column 9. The next row contains the headers, with periods in 12 separate columns. Is there an easy way to get the year from that first line (without a lot of code), or should I get the year based on the POV? Then I am guessing a complex expression in the time columns to build the OS time key? I know I can do all of this in code, but I don't want to skip over any out-of-the-box functionality or best practices. Any good guides on some of the best approaches with the matrix load? Thanks, Scott
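
For illustration, the POV route is usually the least code: take the year from the workflow POV time and append the month number from the period header. The sketch below treats both inputs as placeholders, since the exact hook that exposes the POV time name inside a complex expression varies by rule type:

```vb
' Hedged sketch - povTimeName and periodNumber are placeholders for values supplied by your
' complex expression's context (workflow POV time and the matrix column's period header).
Dim povTimeName As String = "2025M12"                 ' placeholder: workflow POV time member name
Dim year As String = povTimeName.Substring(0, 4)      ' -> "2025"
Dim periodNumber As Integer = 3                       ' placeholder: 1-12 taken from the period column header
Return year & "M" & periodNumber.ToString()           ' -> "2025M3"
```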

How do you know if you are using the Roslyn compiler?
Hotfix 8.4.3 addresses, among other things, security vulnerabilities found in the third-party Roslyn compiler DLL. How do you know whether you are using the Roslyn compiler in your custom code and are therefore at risk? Is the Roslyn compiler only used by the WinSCP libraries (and therefore its removal is WinSCP-related), or are there cases where Smart Integration Functions would use the Roslyn compiler outside of WinSCP?
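
One crude self-check (it says nothing about what the platform itself loads internally) is to export the application's business rules and scan the source for references to the Roslyn assemblies or scripting APIs. A rough sketch, with the export folder and marker strings as assumptions:

```vb
' Hedged sketch: scans exported business-rule source files for common Roslyn markers.
' This only finds explicit references in your own code, not anything OneStream loads internally.
Imports System.IO

Module FindRoslynReferences
    Sub Main()
        Dim ruleFolder As String = "C:\Exports\BusinessRules"          ' hypothetical export folder
        Dim markers As String() = {"Microsoft.CodeAnalysis", "CSharpScript", "Roslyn"}
        For Each ruleFile As String In Directory.EnumerateFiles(ruleFolder, "*.*", SearchOption.AllDirectories)
            Dim source As String = File.ReadAllText(ruleFile)
            For Each marker As String In markers
                If source.IndexOf(marker, StringComparison.OrdinalIgnoreCase) >= 0 Then
                    Console.WriteLine("{0}: contains '{1}'", ruleFile, marker)
                End If
            Next
        Next
    End Sub
End Module
```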

Workflow channels with scenario copies
Hi All, in our multi-year planning process, most entities need to upload four sets of fixed costs (all loaded by the same team, but sent to that entity's finance team by different functional teams), in addition to all the driver-based costs we're calculating (mostly based on form inputs).

For the four sets of fixed costs, we have four import profiles under the same parent workflow. The entities import their files and then run a DM sequence from a dashboard to run the validate/load for a range of years they can select. When they run this sequence, all import profiles (under the same parent) that have been imported are validated and loaded. We had not set up workflow channels on the different loads, and everything was working as expected. The different import profiles use some of the same accounts, but different sets of cost centers. We did not see any collision of data when the same accounts were loaded, and if a balance that had previously been loaded in one import was removed from the import template and reimported, the data in that account would be cleared from the cube after the next validate/load (even if that account was only used by one of the four imports).

Now that we have started to copy cube data from one completed scenario (CY) to another blank scenario (CY_v2), so that some changes can be made to CY_v2, we were running into issues with our fixed cost loads in CY_v2 (balances not clearing, other values being overwritten). This makes sense, and we had not considered loading the fixed costs to copied scenarios. We would like to be able to load only one of the four imports (if, say, the G&A load is the only one with changes) in the CY_v2 scenario.

We were hoping that workflow channels on the imports would solve the issue in the copied-to scenarios, but after completing that setup (changing accounts to NoDataLock, setting up different data source members for each of the four imports, creating the channels, and applying the channels to the data source members), we are still seeing balances "stuck" in CY_v2 when previously loaded balances to a unique account are removed from the latest upload for one of the imports. We had also gone back and reloaded the "source" scenario with the channels applied before copying the data and trying to load only a single template to CY_v2.

Is there a way to use workflow channels so that all data loaded through that channel is cleared and replaced when a new import is loaded to a data source member where the channel is applied, or will accounts no longer in the latest import be left uncleared? The documentation on the Level 2 and Level 3 data units seems to indicate that those accounts will not be cleared, but that does not match the behavior we see in the original loads of multiple sibling import profiles (described in my second paragraph). It also doesn't seem to match how a typical, iterative planning process would work.

Any feedback would be appreciated. Thanks, James

Transformation Rules - Possible Bug
I have an issue: since we migrated to V9, OneStream is adding a space on its own after transforming a member, and this prevents us from loading the data to that member (it fails in the validation step). The data source has been checked, adjusted, and tested in many ways; the extra space doesn't come from there. The transformation rules have also been tested in many ways, and the space is not there either; we even tried one-to-one rules, masks, and all the other rule types, and that is not the issue. The error only happens with two UD3 members, US4 and CA4, both ending in 4. I created a fake one for testing, "US41", and that one works fine. I have been checking with OneStream support, but they just want to repeat the same testing we already did with them on a call, over and over. After checking all that I could, this seems to me like a bug in the new V9.1, but if anyone has any ideas please let me know.
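
Not a root cause, but one thing worth ruling out is an invisible character in the source value itself (a trailing or non-breaking space can look like OneStream appended a space after transformation). A complex expression on the UD3 source field can strip those before the rule lookup runs; the args.Value hook below is an assumption to adapt to your data source:

```vb
' Hedged workaround sketch - args.Value is an assumption for the raw UD3 source value.
' Strips ordinary and non-breaking spaces before the transformation rule sees the value.
Dim rawValue As String = args.Value
Dim cleaned As String = rawValue.Trim().Trim(ChrW(160))   ' ChrW(160) = non-breaking space, common in Excel exports
Return cleaned
```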

Matrix data load with entities in columns
Is there a way to set up a single matrix data source for a file with accounts in rows and entities in columns, where the entities could change, as well as the number of entities? Can you set up the matrix for the maximum number of entities and read the entity from a specific row?
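
For illustration, if the data source is built for the maximum possible number of entity columns, the entity attribute on each matrix column could resolve its member from the header row instead of being hard-coded. The headerValues/matrixColumn placeholders below stand in for however your data source exposes the header row and the current column, which varies by setup:

```vb
' Hedged sketch - headerValues (the parsed header row) and matrixColumn (current column index)
' are placeholders; the real hooks depend on how the matrix data source is configured.
Dim entityName As String = headerValues(matrixColumn).Trim()
If String.IsNullOrWhiteSpace(entityName) Then
    Return String.Empty     ' unused trailing columns in the "max width" layout return nothing to map
End If
Return entityName
```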

Identifying API Parameters for OneStream Audit Log Export via Postman
Hello all, I joined a company that has implemented a OneStream solution. I have access to the platform and am still exploring how it was implemented. I am currently working on integrating OneStream with an external platform and need to export audit logs using the product's API. While setting up the request in Postman, I've had difficulty identifying the correct values for the following parameters: ApplicationName and AdapterName. Despite reviewing the available documentation, these tags remain unclear. Has anyone successfully queried audit logs via the OneStream API who can share insights on where these parameters are defined or how to retrieve them? Any guidance or examples would be greatly appreciated. Thank you in advance!
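
For what it's worth, those two names suggest the request targets a dashboard data adapter: ApplicationName is usually the OneStream application you sign in to, and AdapterName a data adapter defined inside that application which returns the audit rows. A rough sketch of the shape of such a call; the endpoint path, body fields, and adapter name are assumptions to confirm against your server's REST API documentation:

```vb
' Hedged sketch only - the URL route, JSON body fields, and adapter name are assumptions;
' confirm them against your server's REST API documentation before use.
Imports System.Net.Http
Imports System.Text
Imports System.Threading.Tasks

Module AuditLogExportSketch
    Async Function ExportAuditLogAsync() As Task
        Using client As New HttpClient()
            client.DefaultRequestHeaders.Add("Authorization", "Bearer <access token>")   ' placeholder token
            Dim body As String =
                "{""ApplicationName"":""<your application>"",""AdapterName"":""<your audit adapter>""}"
            Dim url As String = "https://<your server>/OneStreamApi/api/DataProvider/GetAdoDataSetForAdapter"
            Dim response As HttpResponseMessage =
                Await client.PostAsync(url, New StringContent(body, Encoding.UTF8, "application/json"))
            Console.WriteLine(Await response.Content.ReadAsStringAsync())
        End Using
    End Function
End Module
```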