Hello everybody, I have a problem for which I haven't found a solution yet. I have a calculation program. For it I must load a large amount of data (10 GB). These files are read into a data structure. After that I call a function that performs calculations on this data together with some input (loaded from an Excel file). This function contains five nested for-loops. The results are correct and everything runs fine, but to get a result I must wait up to 10 hours.

I found that I can run the calculations in parallel with Parallel::ForkManager, where the iterations of the loop are farmed out to extra cores. The first tests went well, but the RAM usage is multiplied by the number of cores. With the modules threads/threads::shared you can share hashes and arrays, so to share the loaded data I changed the structure to a hash. Unfortunately the calculation then needed more than 24 hours, so I cancelled the program. The same happens when I use only 2 cores.

Now my question: is there a possibility to share static variables? (Inside the loops the loaded data is never changed; I only read from the structure.) Or does somebody have another idea how I can solve this problem? Thank you very much

Now my question: is there a possibility to share static variables? (Inside the loops the loaded data is never changed; I only read from the structure.)

If the data structure for the 10 GB of data doesn't change during the run of the script, then you simply need to define that variable at file scope before starting your threads. Each thread should then be able to access that data. Note that with Perl ithreads, data that is not explicitly shared via threads::shared is copied into each new thread, so marking the structure as shared is what keeps a single copy in memory.
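A minimal sketch of that idea, using threads::shared's shared_clone to keep one copy of a read-only structure visible to all threads. The names (%$data, compute_chunk) and the toy data are placeholders for illustration, not from your program:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

# Load the large structure ONCE at file scope, before any thread is
# created, and mark it shared so threads read it instead of copying it.
my $data = shared_clone({
    a => [ 1, 2, 3 ],
    b => [ 4, 5, 6 ],
});

# Each worker only reads from the shared structure.
sub compute_chunk {
    my ($key) = @_;
    my $sum = 0;
    $sum += $_ for @{ $data->{$key} };
    return $sum;
}

# One thread per top-level key; join collects the partial results.
my @workers = map { threads->create( \&compute_chunk, $_ ) } sort keys %$data;
my $total = 0;
$total += $_->join for @workers;
print "$total\n";    # 21 for the toy data above
```

Since you only read inside the loops, no locking is needed; if any thread ever wrote to the structure, you would have to protect it with lock().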

I would highly recommend profiling your script to see where it spends its time, and then work on optimizing the hot spots.
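For example, with Devel::NYTProf from CPAN (assuming your script is named calc.pl; substitute your actual filename):

```shell
# Run the script under the profiler; writes ./nytprof.out
perl -d:NYTProf calc.pl

# Turn the profile into an HTML report (in ./nytprof/)
nytprofhtml
```

The per-line report will show exactly which of the five nested loops dominates the runtime, which is worth knowing before adding more cores.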