As extracts get into the multi-GB range, I don't see much staying resident, which makes fast disk I/O essential for them to keep delivering the best performance boost as they grow. I know the web side has a cache control, but is there a similar hidden knob for extracts? Thx in advance

Portions of the extract will be loaded into memory automatically (upon report execution) and will stay there until/unless that memory is needed by another process. However, there are no knobs or switches that let you "pin" it in RAM or pre-load it.

A good approach is to keep the number of distinct extracts being used by your reports to a minimum. Rather than associate a similar-but-different extract with each report you build, create a single extract which contains the fields you need for all reports, and publish it to Data Server as a data source. Now, all reports can use a single source of data, that single file will get loaded once, and it's less likely to get bounced out of RAM because there are fewer "competing" files also wanting to be loaded.

Also, keep in mind that looking at the number of bytes consumed by particular Tableau processes won't tell you whether, or how much of, the extract is being kept in RAM. Since we memory-map these files, it's the OS doing all the work, and that's where the memory consumption shows up.
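To make the memory-mapping point concrete, here's a minimal, generic sketch in Python (not Tableau's actual implementation; the file is just a hypothetical stand-in for an extract). With `mmap`, pages are faulted in lazily by the OS on first access and live in the OS page cache, which is why they don't show up as heap allocated by the mapping process and can be evicted under memory pressure:

```python
import mmap
import os
import tempfile

# Hypothetical stand-in for a multi-GB extract file (kept tiny here).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (1024 * 1024))  # 1 MiB of data
    path = f.name

with open(path, "rb") as f:
    # Map the file read-only into this process's address space.
    # No data is read yet; the OS loads pages on demand.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm[0:1]  # touching a byte faults that page into the OS page cache
    mm.close()

os.unlink(path)
print(first)  # prints b'x'
```

The same mechanism explains the observation above: the pages backing the mapped file are accounted to the operating system's file cache, not to the process that created the mapping, so per-process byte counts understate how much of the file is actually resident.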