Data Separation

After spending too much time, too often, importing data after requested changes to a complex medical multi-language solution, I have decided to try moving toward the data separation model.

After reading a few discussions about data separation, I have seen suggestions to plan for at least 3 files:

a) the USER file

b) the DATA file

c) the SCRIPTS file

d) someone also suggested having a PRINT file

Well, that brings me to some questions before proceeding with a "good practice":

1) is it appropriate for all the data that will be entered and modified by the users to stay in the DATA file, while keeping the "tags" and "labels" global fields that do not change in the USER file?

2) is it correct to have all the layouts (with portals, etc.) and the whole relationship graph in the USER file, while the DATA file has none (or almost none)?

3) is it appropriate to split the DATA file into 4 or 5 DATA files according to the nature of the data (for example, an ADMINISTRATION_DATA file, a CLINICAL_DATA file, etc.)?

4) would it be appropriate to have the SCRIPTS file only for tasks like import and export?

5) is it correct to avoid having the PRINT file and to manage all the print scripts within the USER file?

6) is there a sort of "white paper" with comprehensive detailed guidelines that I should follow?

Sorry for the many questions, but they all relate to the same final result.

Many Thanks,

Victor

p.s.:

- using FMP Adv 14 and moving to FMP Adv 16 within a month

- multi-platform solution Mac/Win, to be used on Server and/or on a single computer

1) You may have real fields in files other than DATA (I'm presuming USER is the UI file), but you must consider them carefully if there is a need to have fields without global storage.

2) Is there a need to have imports/exports that run as PSoS (Perform Script on Server) or scheduled tasks? If so, you may need layouts for setting these up. They need not be end-user accessible.

Relationships may be needed for some data fields. But you are correct: limited is best.

3) I have several DATA files in some solutions. When there is a great number of records and/or the backups can be less frequent for some files, then the DATA is in more than one file/database. Keep in mind that security must be the same for all users in all files (account & password), but privileges are based on the tables, fields, & layouts in those files.

4) I would not have a SCRIPTS file. Some scripts may be needed in each file, but the bulk would be in the UI file, where the layouts and relationships are defined.

5) PRINT has to do with scripts and layouts, so I don't see the advantage of a separate file for this.

6) There may be other sources for data separation recommendations. I don't recall which.

In the context of using multiple DATA files, since the relationship graph of the USER (UI) file is very complex, would you also consider having multiple UI files, each with a much simpler relationship graph?

No, try to simplify your RG rather than making it more complex with multiple UI files.

The only time I might have a separate UI file is if I have a specific group of users that do not need everything (but that's rare). Or I might have a UI file that allows the users to create layouts for reports without the ability to create tables and fields. Scripting may also be limited, but available for some reports.

If these are not scripted Set Fields, but summary or calculation fields, AND they use related data, then the relationship is required in the file where they are used. Basically: anything defined that calls anything related needs to be on the graph.

A scripted Set Field can be used anywhere there is a TO on the RG where it is called.

Well, I will move very carefully, leaving the whole relationship graph in the DATA file as well and deleting each occurrence while checking the results... a lot of work, but at least much less work for future updates.

I've been using a separated model for years (since FMP 11). I only use 2 files xxxx_App, xxxx_Data. It's 100% WebDirect. This works great since I am constantly adding new features to the application and only have to touch the Data file when I make structural changes. Here are a few things I have discovered:

I maintain identical relationship graphs for the Data file in both files (I don't remember exactly when/why I realized that I had to do this)

Create a script to synchronize the security accounts between files (new accounts can be created from the WebDirect app); a rough sketch follows this list

Most scripts and layouts are in the App file

I have a few small scripts in the Data file for managing security and logins which are called from the App file

I have one script in the Data file that redirects any attempted logins of the Data file to the App file
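A minimal sketch of how such an account-sync call might look, assuming a small helper script in the Data file that is set to run with full access privileges (the script name, variables, and privilege set are hypothetical, and, as noted later in this thread, scripted account creation has security trade-offs):

In xxxx_App, after collecting the new account name and a temporary password:

Perform Script [ "Sync Account" from file: "xxxx_Data" ; Parameter: $account & ¶ & $password ]

"Sync Account" in xxxx_Data:

Set Variable [ $account ; Value: GetValue ( Get ( ScriptParameter ) ; 1 ) ]
Set Variable [ $password ; Value: GetValue ( Get ( ScriptParameter ) ; 2 ) ]
# requires the script option "Run script with full access privileges"
Add Account [ Account Name: $account ; Password: $password ; Privilege Set: "Data Entry Only" ]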

I maintain identical relationship graphs for the Data file in both files (I don't remember exactly when/why I realized that I had to do this)

I haven't dived too deeply into the separation model myself but I am really curious about this one. Again, without a lot of experience here -- I would expect that the only relationships you'd need in the data files would be ones that were essential for calculated fields, lookups, and housekeeping (deleting related records). In other words, you wouldn't need relationships that were only necessary for portals. Although it's entirely likely that there would be overlap, e.g. the relationship that is used to automatically delete related records has the same predicates as the relationship used to display related records in a portal.

As for point 2, it is possible, but not recommended, to create accounts via scripting in the UI, as that can be a risky attack vector. A lot of people do it, but Steven and Wim have denounced it. The problem is that once you go multi-file it almost becomes a necessity, so that the accounts get created and managed correctly.

Now to your point about the RGs:

a. You only need graphs in the data file to support schema-based calculations and such. (Beverly mentioned this above.)

b. If you attempt to maintain matching graphs, then you will be touching the data file way more often than necessary.

I find it hard to imagine that you adhere to this that religiously. It totally goes against why you would go to all the extra trouble to use the sep model in the first place.

I have used the data separation technique since FM 7 or 8; I don't remember when I started.

I usually separate an application into two databases: the first for data, simple layouts, and minimum relationships; the second for layouts, reports, scripts, and a complete description of the relationships using TOs and TOGs. Working like that makes it possible to sell the same application to many customers.

When I set up a new customer, I take an empty database that will be customised by the user as needed.

I have sometimes included tables in the UI file, the rule being that they are tables either of only global fields or "staging" tables used to support import and/or export operations and thus do not permanently store any data.

I maintain identical relationship graphs for the Data file in both files (I don't remember exactly when/why I realized that I had to do this)

The relationship graphs (RG) in the two files do not need to be identical. The RG for a UI file is probably much more extensive than the RG for a data file. In a data file, the RG only has to support relationships required for field definitions. In a UI file the RG has to support relationships to enable the UI.

When I use this I keep all necessary relationships in the data file. The UI has very few.

I also often keep a development file that has the UI built into the data file, just in case there is ever an issue with data separation, like the script index bug in FM16. This keeps me prepared for a fairly quick move to a single file if needed.

With regards to identical RGs in both files: this happens to be a very easy thing to create from a single unified file, and the resulting UI and data copies of the file will work correctly; the data file in particular will just have more relationships than it strictly needs unless you go in and remove them. So that may be where the idea that you need identical RGs came from.

To "split" a single file into UI and data copies:

Save a copy of the file

Open one copy and double click each TO inside Manage | Database and redirect the base table reference from the current file to that of the other copy.

Once all TO boxes have italicized names indicating external data sources, click over to the Tables tab and delete all the tables

That's the minimum work needed to split a single file. Of course it leaves relationships, scripts, value lists, and layouts in the data file that are not needed and which can/should be removed, but if the developer leaves them in place, you still have functional, if not optimal, UI and Data files.

When I use this I keep all necessary relationships in the data file. The UI has very few.

I also often keep a development file that has the UI built into the data file, just in case there is ever an issue with data separation, like the script index bug in FM16. This keeps me prepared for a fairly quick move to a single file if needed.

That's interesting. I'm curious about your method. I've always found that the data needs much less support in the RG. I've also found the easiest way to unify a separated solution is to bring the data tables into the UI file.

If your script uses GTRR to bring up a found set of records, you need a relationship.

If you use a relationship-based conditional value list, you need a relationship.

Any UI-based calculation in a script, conditional formatting, or Hide expression that refers to related data will need a relationship.

On the data side:

Only calculations that are part of the field's definition and refer to related data, plus looked-up value field options, need relationships. In my experience, that's a much shorter list of needed relationships than I find in the UI.

I suppose that you can whittle down the UI based relationships with ExecuteSQL in many but not all cases...
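As a minimal sketch of that idea (the Invoices and InvoiceItems names are hypothetical): a UI relationship whose only job is to show a related-items count on a layout could be dropped from the graph and replaced by an unstored calculation such as

ExecuteSQL ( "SELECT COUNT ( * ) FROM InvoiceItems WHERE id_Invoice = ?" ; "" ; "" ; Invoices::ID )

since ExecuteSQL addresses the table occurrence directly and needs nothing on the graph. The trade-off is that the result is unstored and can be slow on large or heavily modified tables.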

In the interface file, the above relationship makes sense. In DATA, only Invoices -> InvoiceItems would be connected, IF the count of related items is a field in Invoices. This is a field definition and has nothing to do with how or where the data is displayed. Field definitions needing relationships to resolve must be in the file where they are called.

I've attached two screenshots to show what I meant. In the first image, a TOG within the User Interface file contains nine TOs linked by eight relationships. The event table is represented by four TOs and the job item table by two TOs.

In the Data file there is no need to link any of the TOs via relationships. Nor is there any need to have more than one TO for each of the tables.

Everyone here has advocated not putting relationships into either the UI or data file if they aren't needed in that specific file, but we usually find far more need for relationships in the UI file than the data file, as there is relatively little functionality in the data file that needs relationships in order to function.

Back to the original question, do you plan on distributing the UI to each user to save locally, or will it also be hosted with the data file on FMS? If your solution has a good amount of remote access (WAN), then I'd recommend each user have a copy of the UI file saved locally on their device. With all of the scripting, layouts/CSS, graphics, etc. in the UI file, this saves having all of that information pulled down from the server: thus, there will be less latency and the solution will run in a more optimized manner.

Of course, if you have a bunch of UI files running around, you would benefit by having a system in place that both checks for and distributes UI updates. One possible solution is to have a table in the UI file that stores the UI "version" number. You would also have a field in the Data file to represent what the current UI version should be. Upon opening the UI file, a script would run that checks the current UI version against what the Data file says it should be. If there's a discrepancy, the user can be notified. One simple method to distribute UI updates would be to store the current UI file in a container field in the Data file and allow that file to be downloaded. Or alternatively, the current UI file could be hosted on your domain and then a script in the UI file could open the URL, which would cause the file to be downloaded from the internet.

Here's an example of the scripting. The field "GUI::GFileVersion" is in the UI file. The field "MAIN::GFileVersion" is in the data file. The field "MAIN::GFileUpdateLocation" is also in the data file, and it is the URL where the current UI file is saved. In the example below, the file is force-closed if the user does not have the most current version.
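A minimal sketch of what that opening-script check might look like, using the three fields named above (the dialog wording and exact steps are assumptions):

If [ GetAsNumber ( GUI::GFileVersion ) < GetAsNumber ( MAIN::GFileVersion ) ]
  Show Custom Dialog [ "Update required" ; "This copy of the UI file is out of date and will now close." ]
  # download the current UI file from the URL stored in the data file
  Open URL [ With dialog: Off ; MAIN::GFileUpdateLocation ]
  Close File [ Current File ]
End If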

With my vertical market solution, I had separated my solution into 3-4 files, as such:

UI file (this is the primary file and the majority of script reside here)

Data file

Prints file (this file exists to allow customers to customize reports and printouts, which FM is so great at, without losing the changes when I send out a version update)

I had added a Reports file to that as well, as customers each had unique reporting needs and I ended up adding a lot of custom scripts and UI work. I didn't love this file, as a lot of the reports could have been used universally, so I ended up incorporating a lot of it into the UI file and having reports print from the Prints file.

As a point of note, I've done A LOT of work like this and I like it a lot, but it isn't perfect in that you really have to understand the customization needs and how often end-users or power users may need deeper access to things like file references etc.

You might also investigate 360Works' new deployment tool. Even with data separation, the time comes when you have to update the data file, and this new tool looks like a much nicer way to manage that process, with much less chance of missing a detail such as failing to update a serial number setting. (And even with UUIDs, you sometimes need serial numbers in a solution.)

True, but nailing down every detail in a scripted approach can be problematic. I often find that I need those unstored calculations as a check on the scripted process to confirm that I have correct results. These fields are then hidden away from the user when the data sets are large. For smaller data sets, an unstored calc is fine and much simpler to implement.

And auto-enter calculations and looked-up value settings produce stored results, so performance is not an issue. Scripting can do the look-up/auto-enter for you, but there's no need for those complications unless there are special circumstances that require it.

I've developed these habits based upon advice from someone who works with a database with tens of millions of records, where they are doing everything they can to optimize performance. I don't find it any more difficult to define a calculation in a script step vs. in Manage Database. To each his/her own.

I maintain identical relationship graphs for the Data file in both files (I don't remember exactly when/why I realized that I had to do this)

When I started trying data separation by extracting one pair of related tables (Data) from a file (UI), I discovered that a field calculation in the new data table needed to refer to a value list that could only exist in the UI file, I think because of a relationship involving a global field in a UI file table relating to a field in a table from the data file.

Just to try to describe that in another way: I had a relationship where a global field in Table A was connected to a primary key in Table B, and a value list was calculated on a field in Table B based on values starting from Table A. Table B has a field that validates based on this value list, and when I split Table B out into a separate data file, this value list could no longer exist in that file because Table A wasn't in the file hosting Table B.

Presumably you'd be keeping your data together in one file, which would avoid this kind of problem, but I can see the usefulness of maintaining identical relationship graphs in the UI and Data files to avoid it. I can imagine a similar problem occurring if you have a table of global fields in the UI file and value lists feeding off relationships between those global fields and tables in the data file.

(I haven't sorted this problem out yet because it's a low-priority project and/or this particular validation wasn't crucial. It was an information-gathering foray into data separation on a low-priority data table so I could see how it works and begin to plan how I want to implement data separation in my other files. I hadn't thought of linking an instance of a table from the UI file into the data file, but I'm not sure how that would behave when I replace the UI file. I had intended to move the other data tables into other separate data files, largely because I anticipate varying permissions requirements for the different data, but I haven't gotten around to that yet.)

Unless you have calculation fields or auto-enter field options that refer to data from a related table.

Yes, that's right. In this instance, none of the tables needed to be linked in the data file and that is why I chose that particular TOG.

I have always aimed to keep the data file extremely simple and that includes avoiding relationships between table occurrences where possible. The ExecuteSQL function could be used in calculations instead of having a relationship, or as WRossJames has said, the data can be handled by scripts, but I find it simple to have the relationship mapped in the graph.

But if needed, a value list can be defined in the data file to use the values of a value list in the UI file. I haven't needed to do that in a long time, and the new options for getting unique lists have reduced the need for it even more, but if you need that value list in the Data file, it's pretty easy to get.

My thoughts are similar. I work in SQL databases, and there are no calculations, summary fields, or relationships (for the data). These are all achieved with SQL queries on the fly. So I tend to make my db tables as "plain" as possible and use a scripted Set Field for the most part. This is as "separated" as you can get: like a website with a SQL backend and a web app for the interface.

I think I started understanding it by using SQL Explorer, free from SeedCode. I saw how they were building their queries in a Let statement, with the fields, variables, etc. outside the query itself.

I think the real stroke of genius in that query approach is their thinking to use the Substitute function. They replace the "::" with a carriage return, so they get a return-delimited list with the table as the first value and the field as the second value. I probably would never have thought to do that.

I have built a utility script with a bunch of different Set Variable steps with different query scenarios, like the one above. Here is another one (getting the COUNT of the aaid field in the T27Draws table, based upon two arguments):
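A sketch of what one of those Set Variable steps might look like, following the pattern described above (the WHERE columns status and season, and the script variables, are hypothetical placeholders):

Set Variable [ $count ; Value:
  Let ( [
    // split "TO::field" into a return-delimited list via the Substitute trick
    ~pair = Substitute ( GetFieldName ( T27Draws::aaid ) ; "::" ; ¶ ) ;
    ~table = GetValue ( ~pair ; 1 ) ;
    ~field = GetValue ( ~pair ; 2 ) ;
    ~sql = "SELECT COUNT ( " & ~field & " ) FROM " & ~table & " WHERE status = ? AND season = ?"
  ] ;
    ExecuteSQL ( ~sql ; "" ; "" ; $status ; $season )
  )
]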

I copy the Set Variable statement that matches my needs, paste it into the script I'm working on, and change the fields and arguments as needed. I use ExecuteSQL all over the place, almost exclusively with relatively simple queries like this (no joins, etc.). I build and test them in the Data Viewer.

The GetFieldName function is extremely important here. You can rename your tables (TOs) and fields (in the definitions and as TOs on the graph), and your query will still work. Hardcoding does not prevent breakage if you do change any names to protect the innocent.

As for point 2, it is possible, but not recommended, to create accounts via scripting in the UI, as that can be a risky attack vector. A lot of people do it, but Steven and Wim have denounced it. The problem is that once you go multi-file it almost becomes a necessity, so that the accounts get created and managed correctly.

What other approaches could you use for security maintenance across the different files?

After spending too much time, too often, importing data after requested changes to a complex medical multi-language solution, I have decided to try moving toward the data separation model.

While unusual, there are some reasons to use the data separation model (e.g., needing to swap out UIs that are being separately developed). But you have never said why you want to do this. If you do not have a compelling reason to use data separation, you are increasing your work by having to maintain relationship graphs in two systems AND you are having to do security in two different systems. Will you be adding users in both systems, or will you be running a directory service? You have to make privilege groups in both systems now too, etc., etc.

Until you identify the compelling reason to use data separation, I'm going to recommend you stay away from it because it will possibly complicate your solution unnecessarily. I think you should keep things simple until complexity is needed.

Seems to me that the reason for data separation was made quite clear in the original post and is right there in the part that you quoted:

After spending too much time, too often, importing data after requested changes to a complex medical multi-language solution

Seems clear enough to me, since reducing the frequency of data imports is one of the most basic reasons for using data separation. The assumption is that you deploy UI updates far more often than you deploy data-level updates.

Just wanted to chime in here that our company, Colibri Solutions, has been building FileMaker custom applications using the UI/Data Separation model since 2007. I spoke about it very briefly as part of my session at 2016 DevCon on Filemaker Custom Apps in Complex IT Environments, and we are working on an updated post for our company blog (colibrisolutions.com/nectar) on our implementation, its benefits, and things you need to do to make it work. Look for an announcement in the Community Discussions over the weekend.

Seems to me that the reason for data separation was made quite clear in the original post and is right there in the part that you quoted:

After spending too much time, too often, importing data after requested changes to a complex medical multi-language solution

Seems clear enough to me, since reducing the frequency of data imports is one of the most basic reasons for using data separation. The assumption is that you deploy UI updates far more often than you deploy data-level updates.

Well, I guess I'm just a stupid idiot... but it was not obvious to me that he is running multiple UIs for a common data set and reimporting into different solutions. But from what you say, I guess that is what he is doing. Just because he is spending too much time importing data didn't convey that situation to me.

Regardless, I've seen some pretty slick solutions where they used the UI to determine the language to use in layouts, or which layout to use, etc., which would not involve reimporting data. If it is common data, is there a reason he is not using a server to serve up the data? Maybe these are in remote locations in Africa and the client machines can't reach a server, or it's super slow? I guess I don't fully understand the limitations and why he wants to go this way. Maybe I just don't understand why he is doing all these imports, which is OK. It's just a discussion here.

Personally, I would not be doing imports at all; I would host it on FMS, have one solution, and adjust the UI for different languages within the same solution.

But if he cannot use a server and needs to be moving the data from FMP to FMP with different UIs for them, then yes, that is a reason for the separation model and it'll work. Just watch out for things like field ID numbers when migrating from one version to the next, because FM goes by them and not field names.

Just wanted to chime in here that our company, Colibri Solutions, has been building FileMaker custom applications using the UI/Data Separation model since 2007. I spoke about it very briefly as part of my session at 2016 DevCon on Filemaker Custom Apps in Complex IT Environments, and we are working on an updated post for our company blog (colibrisolutions.com/nectar) on our implementation, its benefits, and things you need to do to make it work. Look for an announcement in the Community Discussions over the weekend.

- James Wesolowski

Colibri Solutions

NYC

I've talked with several guys with vertical SBA solutions where they have a LOT of copies of the same solution, and this works really well for those cases, especially when there need to be a lot of UI updates and the backend rarely changes much. You just send out the UI to replace the previous one. Maybe you were the guy who did a presentation on this a couple of years ago at DevCon.

I just find I run into lots of people wanting to do this when they only have a couple of different UIs, and in that case the amount of effort that goes into things like security outweighs the benefits of the separation model. But there clearly is a point where it is beneficial to the needed solution. The real benefit is when the backend file is large and the UI is small; then making UI updates and sending them out is much faster than sending out a single solution with all the data. One company I know that does this has to update a lot of forms frequently, so updating just the UI and distributing it to all the clients is much more efficient.

Don't see "multiple UI's into a common data set" here at all. Yes, that's possible, but you don't need a common data set with multiple UI's to justify the need for data separation. I read this as a typical "vertical market" situation where there are multiple customers with their own UI and Data file sets. The developer then comes up with an upgrade and, understandibly, doesn't want to have to run data imports (or have the customer run data imports) just to deploy an update unless it cannot be avoided.

And thus data separation is used to reduce (not eliminate) the need for such data imports by restricting them to any newly modified data files in the solution.

PS. I certainly did not call you stupid, nor imply it. I can miss the obvious just as often as anyone else (just ask my wife). But I chimed in on this to save the OP from having to make what seemed to me an unnecessary additional explanation.

This has worked for you in a high-volume production environment? In previous attempts to see how this performs, I ran into a significant number of issues in a multi-user environment where records are created near-simultaneously, as well as some performance issues as table sizes grow.

WRossJames wrote:

Rather than using auto-enter for serial number fields, I just set them using Execute SQL. (Setting the field to MAX + 1.) I do this in the "new record" script.

That way you never need to worry about updating a serial number in Manage Database.

Here's my calculation for my Customer# (my "serial number" field is "aaID" for all tables):
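(A minimal sketch of the MAX + 1 idea being quoted, with the Customers table name assumed:)

Set Field [ Customers::aaID ; ExecuteSQL ( "SELECT MAX ( aaID ) FROM Customers" ; "" ; "" ) + 1 ]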

I agree with Joshua here. What can happen is that two users doing this at nearly the same time can both get the same MAX value and then generate duplicate values. Auto-entered serial numbers are darn near bulletproof for data entry, but you have to be careful during imports for upgrades and crash recovery to also update the next serial value. This is easy to include in an update/import script, but also easy to forget or overlook.

One of the reasons that I like the idea behind 360Works' new deployment tool is that it generates brand-new import scripts that automatically find and update such serial number fields. (And yes, UUIDs avoid this issue altogether, but there can still be very valid reasons for sticking with serial numbers, sometimes for fields that are not actually primary keys but "audit labels" made visible to the users.)

And they don't have to be at exactly the same time. One user need only get the "max" before another user's action increments the field being "maxed" and commits that change. It's a low-probability event, but you don't want to take such risks when they are easily avoided just by sticking with the auto-entered serial number or using UUIDs.

Yep, I use UUIDs for key fields. That gets rid of any of these problems in syncing.

Also, I make the UUIDs decimal (convert hex to decimal) for performance and prefix them with a timestamp-as-number so that they will sort in sequence and I can always back-calculate the date when a record was created.

Which is fine until you hit a design requirement such as a visible serial number on every invoice that must be gap-free to prove to auditors that they are looking at the correct data. Then a UUID just won't do, even though you might well still use the UUID as your primary key and the serial number as a "label" field. The consequences of messing up the next-serial-value setting are then less catastrophic, but you still need the serial number and still need to update the "next value" property.

The main difference, if I understand the explanation I received a long while back, is that the serial number functionality is either queued up and processed one at a time, or locked until the record is created (I don't remember which), thus preventing two records from receiving the same serial number.

Using the calculation engine functions (including Max and ExecuteSQL) doesn't have that same protection. Your margin of error is then based on the execution time of the script and function, along with the commit time, since the calc engine exposed to us can't see a record until it's committed. The probability of a collision is actually relatively high when compared to other methods.

In a case like that, I would instead create a table of serial numbers and mark them as they are used: take out a record lock, and if that succeeds and the number is not used, then use that value.
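A rough sketch of that pattern (table, field, and layout names are hypothetical): a SerialNumbers table holds pre-generated values with a Used flag, and the record lock arbitrates between competing users:

Go to Layout [ "SerialNumbers" ( SerialNumbers ) ]
Enter Find Mode [ Pause: Off ]
Set Field [ SerialNumbers::Used ; 0 ]
Perform Find [ ]
Go to Record/Request/Page [ First ]
Open Record/Request
# Get ( LastError ) = 301 means another user holds the lock on this record
If [ Get ( LastError ) = 0 and SerialNumbers::Used = 0 ]
  Set Variable [ $serial ; Value: SerialNumbers::Number ]
  Set Field [ SerialNumbers::Used ; 1 ]
  Commit Records/Requests [ With dialog: Off ]
Else
  # locked or already consumed: revert and try the next record
  Revert Record/Request [ With dialog: Off ]
End If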

Which is fine until you hit a design requirement such as a visible serial number on every invoice that must be gap-free to prove to auditors that they are looking at the correct data. Then a UUID just won't do, even though you might well still use the UUID as your primary key and the serial number as a "label" field. The consequences of messing up the next-serial-value setting are then less catastrophic, but you still need the serial number and still need to update the "next value" property.

This is a valid point, Phil. But there are ways around it, such as not assigning the Invoice Number until it is done in a batch from a related table that only the accounting department can run. That does the generation, not the creation, of the invoice record. So there are solutions. But most importantly, by using UUIDs as your primary key you don't have problems with duplicate serial numbers, and that is what is being solved here.

Yes, you get gaps if records are deleted. The same is true for other numbering methods, but an auto-entered serial is far safer. When implementing a serial number scheme that must be gap-free to keep auditors happy, you don't allow record deletes. Instead, you mark a record that you want to delete as "Void" to avoid the gap. That would be necessary using other methods as well, but with the built-in serial number feature you don't have to worry about duplicates, as long as you correctly manage next-serial-number settings during imports into a new version or from recovered files.

Such as not assigning the Invoice Number until it is done in a batch from a related table that only the accounting department can run.

That's not always acceptable. It may need to be visible on the printed copy delivered to the customer, and then you are back in a situation where you risk duplicates if you don't use the auto-enter feature. (Imagine having a customer ask you to pull up a past invoice by specifying the UUID!)

To me it's all beside the point. Auto-entered serial numbers remain a very simple way to get what you need, avoiding the added design work required to get the same result in a great many situations. UUIDs may also do the job, but not necessarily any better; UUIDs have their own issues that you also have to respect.

I only depart from using serial numbers when there's a clearly specified requirement that either cannot be met by an auto-entered serial number or for which another method becomes the better option. Syncing multiple devices back to a common table is one very obvious case where UUIDs shine. Using one field as the foreign key to the primary keys of multiple parent tables is another.

The main point that I am emphasizing here is that, in my opinion, many developers dismiss auto-entered serial numbers for reasons that I do not find nearly as compelling as they do.

Interesting. A Text type field then, right? Or, what do you mean by "creation timestamp as number"?

I am timestamping in another field, so I could use that field in sorting, right?

Best regards,

Ross

I make the UUID into a UUIDdec (hex to decimal) and prefix the timestamp-as-number at the front of it. This keeps the UUIDdec number in creation sequence the same way a serial number would be, so you can easily tell which record came first. I still have a separate field for the creation timestamp. But here is the UUIDdec formula I use:

Let ( [
  F1 = Get ( UUID ) ;
  F3 = Get ( CurrentHostTimestamp ) ;
  F4 = GetAsNumber ( F3 ) ;
  F6 = F4 & Substitute ( F1 ;
    [ "0" ; "00" ] ; [ "1" ; "01" ] ; [ "2" ; "02" ] ; [ "3" ; "03" ] ;
    [ "4" ; "04" ] ; [ "5" ; "05" ] ; [ "6" ; "06" ] ; [ "7" ; "07" ] ;
    [ "8" ; "08" ] ; [ "9" ; "09" ] ; [ "A" ; "10" ] ; [ "B" ; "11" ] ;
    [ "C" ; "12" ] ; [ "D" ; "13" ] ; [ "E" ; "14" ] ; [ "F" ; "15" ] ;
    [ "-" ; "" ]
  )
] ; F6 )

You can make it into a Custom Function and just use it for the primary key field of each table.

PS: Keep it a NUMBER field so it performs better per Jeremy Bante's reasoning!

taylorsharpe - is that your whole calculation? It's missing part of the conversion from base 16 to base 10. Or are you just converting it to be a number?

Yep, that's the whole formula. All we need for this purpose is to get each hex digit into a decimal format; the actual value is irrelevant. So this isn't a real hex-to-base-10 conversion, only a conversion of the individual digits, in sequence, to two-digit decimal, ignoring the hyphens. But good question, in case someone didn't realize what was going on here. Obviously Josh is a mathematician, and I bet this really confused him since it's not real math.

In some cases, in fact quite often, I can't use a UUID as a serial number because items must be numbered in chronological order. I mean, for example, that the first bill of the month must be numbered 1, the following one 2, and so on until the end of the month.

If I use a UUID, the person who checks my work will not be very happy.

Sometimes I use three fields: one as a UUID, another as a serial number with or without the right to modify it, and a third for specifying something special, such as BILL_nnn (BILL_026), ACC_nnn (ACC_099), factory_number_nnn (1_987), or city_shop_nnn (LA_21_876).

One thing is for sure: I prefer to have a single application that can serve multiple customers rather than multiple applications that each serve a single customer.

That's why I have one FileMaker database where I create the TOs and TOGs, the relationship graph, the scripts, the layouts, and the value lists (with or without local or external tables), and another FileMaker database that contains only basic tables and layouts for viewing the raw content of the tables.

The application database is also the only place where these tables and the global table for localisation are defined.

Sometimes I use three fields: one as a UUID, another as a serial number with or without the right to modify it, and a third for specifying something special

Ditto! IF I need a serial, it is not the UUID/primary key. The second field, a serial with the right to modify (usually by script), might be a sort field. In a child record situation, this only needs to be numerically sortable in relation to the parent and need not be sequentially serial. For example:
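(A hypothetical sketch; the Parent and Child table names, the id_Parent key, and the SortOrder field are assumptions.) A "new child" script might assign sort values in gaps of ten, so rows can later be re-sequenced between neighbors without renumbering anything:

# next sort value: 10 for the first child, otherwise the highest existing value + 10
Set Variable [ $next ; Value: If ( IsEmpty ( Max ( Parent_Child::SortOrder ) ) ; 10 ; Max ( Parent_Child::SortOrder ) + 10 ) ]
Set Variable [ $parentID ; Value: Parent::ID ]
Go to Layout [ "Child" ( Child ) ]
New Record/Request
Set Field [ Child::id_Parent ; $parentID ]
Set Field [ Child::SortOrder ; $next ]
Commit Records/Requests [ With dialog: Off ]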

Many thanks again to everyone for the feedback on my original question.

I did my "home work" and I tried to split the original solution into two files: UI file and DATA file.

I changed all the table occurrence references and checked the two files multiple times using the BaseElements utility.

The new solution with data separation works fine.

BTW, presumably because the solution uses too many self-join tables and many cross-linking tables among the data, the final result is that the relationship graph is more complex in the DATA file and simpler in the UI file, which is exactly the opposite of what I wanted (because it means that if I make changes I may also need to modify the DATA file too often).

But I believe that such problems stem from my coding habits, which could presumably be optimised using different strategies.

the solution uses too many self-join tables and many cross-linking tables among the data, the final result is that the relationship graph is more complex in the DATA file and simpler in the UI file, which is exactly the opposite of what I wanted

It's good to hear you worked out a solution that meets your needs. As you can tell, there are often many different ways to approach a single problem within FileMaker, some better than others. It is often good to talk through them and find the one that works best for you. Glad you did!