I have an extremely large (500TB) network share mapped and use Everything to index it. I also use the HTTP server to allow users to search the share. I have the index refresh once a day at midnight (since search is unavailable while Everything updates).

My problem is that occasionally, Everything or the system itself will crash, and upon restarting, Everything refreshes the index, meaning the HTTP search is offline for an hour or so while the index rebuilds. Is it possible to turn off index refreshing EXCEPT for the update time set in Options -> Indexes -> Folders?

Please try starting Everything and let it index the folders.
Once it has completed indexing the folders, exit Everything completely.
This will force Everything to save Everything.db to disk, so the next time you run Everything, it will try to load the database from disk instead of re-indexing everything.

I would recommend removing any NTFS indexes from Tools -> Options -> NTFS, as these indexes could be causing Everything to rebuild the entire database.

The scheduled updates in Tools -> Options -> Folders occur in the background.
You should be able to use the HTTP server while this occurs.
Starting Everything after a crash may trigger one of these scheduled updates.

I have tried letting Everything load and index, then exit. Upon restart, Everything does load from the .db file. However, almost any change I make to the Indexes -> Folders tab results in an entire database refresh.

For example, I have two network drives mapped:
\\ddn-cifs\500TB
\\filer\100GB

"Attempt to monitor changes" is off, and the schedule is set to update at midnight for both folders.

If I add a new share or remove \\filer\100GB, the entire index refreshes, meaning it has to rescan that 500TB folder.
As for those scans occurring in the background: the HTTP server does continue to serve the page, but it only serves the custom everything.gif I have at the top and a blank search field which displays nothing. A search for * returns 0 results until the index finishes refreshing.

I have "Include in database" unselected on the Tools -> Options -> NTFS page; is that the only way to remove the local index?

I have tried letting Everything load and index, then exit. Upon restart, Everything does load from the .db file. However, almost any change I make to the Indexes -> Folders tab results in an entire database refresh.

This is the expected behavior.
I hope to make adding and removing folder indexes nearly instant for the next update.

I have "Include in database" unselected on the Tools -> Options -> NTFS page; is that the only way to remove the local index?

Yes.
Please make sure "auto include new fixed volumes" and "auto include new removable volumes" are disabled too.

I'm testing Everything on our SAN storage... congratulations, this tool is excellent
I also have the same issue as the OP. I'm indexing several SAN units with a total of about 100 TB and about 10 million files. This index takes about 2 hours (don't ask). I have a scheduled update for 4am, but I'd like the current index to remain available while a new one is built.

Changing any indexing options also causes the index to be rebuilt, which again makes it unavailable for 2 hours. All NTFS volumes are disabled; it's just network shares.

RAM usage after indexing is about 1.2 GB, which is fine. I'm using 1.4.1.809b x64.

Thank you!

Last edited by zybexXL on Tue Mar 07, 2017 8:45 am, edited 1 time in total.

May I also suggest an alternative, faster mode for network share scanning? (perhaps a new option, "use fast network folder scans", or something).
Enumerating folders with thousands of files is very slow over CIFS/SMB protocols. Windows makes many I/O calls to get info for each file in a given network folder, making it extremely slow. To work around this, I implemented an algorithm in some of the Perl/Python scripts we use to cache folder contents based on the modified date of the folder:

For each folder:
- read the folder's MTIME
- if the folder is in the DB, compare with the stored MTIME; if it's the same, then the direct contents are the same (a POSIX requirement) - no need to update!
- if the MTIME changed, read the new contents from the network as usual, and update the DB
- recurse into ALL subfolders (regardless of whether the contents came from the DB or not). The same MTIME guarantees that the contents of a folder are the same, but says nothing about subfolders, so we still have to recurse.

This improves scanning time of our volumes from 2 or 3 hours to just a few minutes, and greatly reduces I/O on the SANs.
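The steps above can be sketched in Python. This is a minimal illustration of the caching scheme, not the poster's actual script; the `scan_tree` function and the in-memory `cache` dict are hypothetical names, and a real implementation would persist the cache to disk.

```python
import os

def scan_tree(root, cache):
    """Recursively list files under `root`, reusing cached listings when a
    folder's mtime is unchanged. `cache` maps folder path -> (mtime, files,
    subdirs). Returns a dict mapping each folder path to its file names."""
    try:
        mtime = os.stat(root).st_mtime
    except OSError:
        return {}  # folder vanished or is inaccessible; skip it

    cached = cache.get(root)
    if cached is not None and cached[0] == mtime:
        # Same mtime: the folder's direct contents are unchanged,
        # so skip the expensive network enumeration.
        files, subdirs = cached[1], cached[2]
    else:
        files, subdirs = [], []
        with os.scandir(root) as it:  # one batched directory read
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    subdirs.append(entry.name)
                else:
                    files.append(entry.name)
        cache[root] = (mtime, files, subdirs)

    result = {root: files}
    # A matching mtime says nothing about subfolders, so always recurse.
    for name in subdirs:
        result.update(scan_tree(os.path.join(root, name), cache))
    return result
```

On a cold run every folder is enumerated and cached; on later runs only folders whose mtime changed are re-read from the network, which is where the hours-to-minutes speedup comes from.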