Thanks for the quick response. I checked df and nothing was full - in fact, all the local drives were at only 25% usage. I checked config.ini and the command used to start the system, and it appears we have not specified tmp-path - does it default to /tmp?

I ran the query and, while it was running, monitored the contents of /tmp and used df to watch the filesystem.

df showed the 8 GB of available space on the filesystem being used up as the query progressed, which confirms your diagnosis. However, I did not see any additional files, or any changes to the sizes of the existing files, in /tmp.

I subsequently used
du -hs

to monitor /tmp, and also the SciDB install directory (/opt/scidb/12.3) - no change observed.

So I’m not sure how to fix the problem - unless /tmp is being written to and I’m just not detecting it, in which case providing more space to /tmp would appear to solve the problem.

We remounted /tmp to a much larger file system - that solved the problem - thank you.

I’m still wondering why no file was reported in /tmp and why none of the existing files in /tmp changed size. It seems like something funny is going on with the OS.

FYI, the raw array had ~150 million rows and 10 attributes (about half null on average) - about 9 GB in the original CSV file. The operation peaked at 17.4 GB of filesystem usage, based on df.

edit: forgot to mention - definitely our mistake for putting /tmp on the small root file system! Thank you again for troubleshooting the problem.

When SciDB creates temporary files, it “unlinks” them right after creation. The files are then in a special state: they will be removed as soon as SciDB closes them. The benefit is that temp files are never left lying around - they are reclaimed as soon as possible, even if the SciDB process dies unexpectedly. The drawback is that you can’t see these files with ls or du; they may still show up under a lower-level tool like “lsof”.
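The unlink-right-after-creation behavior described above is easy to demonstrate on any POSIX system; here is a minimal sketch in Python (the file and size below are made up for illustration, not anything SciDB actually writes):

```python
import os
import tempfile

# Open a temp file, then unlink it immediately - the pattern described above.
# The directory entry disappears, but the open file descriptor keeps the
# data alive (and consuming disk space, visible to df) until it is closed.
fd, path = tempfile.mkstemp(prefix="demo_invisible_")
os.unlink(path)                  # name is gone from the filesystem...
assert not os.path.exists(path)  # ...so ls/du in that directory won't see it

with os.fdopen(fd, "w+b") as f:  # ...but the descriptor still works fine
    f.write(b"x" * 1024 * 1024)  # this 1 MiB still occupies blocks on disk
    f.flush()
    f.seek(0)
    data = f.read(16)

print(len(data))  # 16
# Once the file is closed, the kernel reclaims the space automatically -
# even if the process had crashed before reaching this point.
```

This is also why lsof can still find such files: it inspects open descriptors rather than directory entries (on Linux, `lsof +L1` lists open files whose link count is zero).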

Thanks for the explanation! I like the idea of removing them as soon as possible. I guess the point is that with a “regular” temp file, if the system crashes, the process won’t be around to delete the file. Kind of a neat way to fix that.

One suggestion for user friendliness: you could touch an empty file whose name explains what is going on (or whose contents further explain the process):
scidb_is_using_an_invisible_temp_file_README.txt

and delete it when the operation finishes. If it doesn’t get deleted, there is no harm, since it is so small (< 1 KB).
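The suggestion above could be sketched roughly like this; the marker filename and wording are purely illustrative, not anything SciDB provides:

```python
import os
from contextlib import contextmanager

# Hypothetical marker path, echoing the suggestion above.
DEFAULT_MARKER = "/tmp/scidb_is_using_an_invisible_temp_file_README.txt"

@contextmanager
def invisible_temp_marker(marker=DEFAULT_MARKER):
    """Drop a tiny README while an unlinked temp file is in use,
    and remove it on a clean finish."""
    with open(marker, "w") as m:
        m.write("An unlinked (invisible) temp file is in use here; "
                "space reported by df may grow while it exists.\n")
    try:
        yield
    finally:
        # Removed on a clean finish; if the process dies first, the
        # tiny leftover file is harmless and documents what happened.
        if os.path.exists(marker):
            os.remove(marker)

# Usage: wrap the work that uses the invisible temp file.
# with invisible_temp_marker():
#     run_query()   # hypothetical placeholder for the real work
```

The try/finally means the marker survives only if the process dies mid-operation, which is exactly when the explanatory breadcrumb is useful.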