I think that this is the right place to use a batch-processing system. The web site would be able to “submit a request” to that system to do this particular thing. The work itself would be carried out by that system, which would also be responsible for regulating how many such requests could actually be in progress at any one time. The web page (and by extension, the web server) is merely a user interface for getting such things started and/or for monitoring their status. The web server neither has, nor is given, any direct control over the situation.

The way I would do it is to have a simple CGI script on the server that sets a flag to indicate that a dump has been requested, by writing an empty file to a certain place. (After checking that the requesting user is authenticated, and has the right to request a dump.)
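A minimal sketch of that CGI script, assuming a shared spool directory (the path and file names here are my own invention, and the authentication check is omitted):

```shell
#!/bin/sh
# CGI sketch: after the request has been authenticated and authorised
# (not shown), signal the batch side by touching an empty flag file in a
# spool directory that the cron job also watches. DUMP_SPOOL and the
# /tmp default are assumptions, not a fixed convention.
SPOOL="${DUMP_SPOOL:-/tmp/dump-requests}"
FLAG_FILE="$SPOOL/dump-requested"

mkdir -p "$SPOOL"
touch "$FLAG_FILE"

# A CGI response is just headers, a blank line, and a body.
printf 'Content-Type: text/plain\r\n\r\n'
printf 'Dump requested. Check back in a few minutes.\n'
```

Note that the flag file carries no data; its mere existence is the request, which keeps the web-facing script as dumb as possible.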

Then have a cron job that runs every few minutes as a more privileged user. If the flag is present, it can do the dump and write the result to a location that the web server can read. If there is no flag, or the last dump was too recent, it exits immediately.
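The cron side could look something like this. Again the paths, the five-minute throttle, and the `run_dump_if_requested` name are assumptions; `date` stands in for whatever dump command is actually being run:

```shell
#!/bin/sh
# Cron-side sketch, run every few minutes as the privileged user.
SPOOL="${DUMP_SPOOL:-/tmp/dump-requests}"
OUT_DIR="${DUMP_OUT:-/tmp/dump-out}"
FLAG_FILE="$SPOOL/dump-requested"
OUT_FILE="$OUT_DIR/latest.dump"
MIN_AGE_MIN=5

run_dump_if_requested() {
    # No request pending: do nothing.
    [ -f "$FLAG_FILE" ] || return 0

    # Last dump too recent: do nothing. find prints OUT_FILE only when
    # it is older than MIN_AGE_MIN minutes, so empty output means the
    # existing dump is still "too new" to replace.
    if [ -f "$OUT_FILE" ] && \
       [ -z "$(find "$OUT_FILE" -mmin +"$MIN_AGE_MIN")" ]; then
        return 0
    fi

    # Consume the request first, then produce the dump where the web
    # server can read it. "date" is a placeholder for the real dump
    # command (pg_dump, mysqldump, tar, whatever applies).
    rm -f "$FLAG_FILE"
    mkdir -p "$OUT_DIR"
    date > "$OUT_FILE"
}

run_dump_if_requested
```

Installed with a crontab line along the lines of `*/5 * * * * /usr/local/bin/run-dump.sh` (path hypothetical). Deleting the flag before running the dump means a stuck dump cannot be re-triggered endlessly by the same request.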

From the end user's point of view the process is to visit a special web page, enter a username and password, then wait 5 minutes or so before downloading the result.
