Using a script to measure memory usage, I obtain values as high as 1.7 GB of memory usage by the OCaml process, while the test script by itself consumes about 5 MB.

Is this expected behavior? I would expect very little memory overhead from launching the child process, which would mean that almost all of the available memory is being consumed by the OCaml process.
But since the OCaml process itself does not crash, this would mean either that it is aggressively over-allocating memory (later increasing its effective usage when there is a shortage, but being unable to release that memory to the external process), or that the error message is actually unrelated to the Sys.command call. In the latter case there might be a bug, hence my report.

If the total amount of memory (including swap) is around twice the OCaml process size, I think this is expected. The Unix way of launching a process is to call fork() first, which duplicates the whole memory (as well as the open file descriptors and other system resources) of the calling process, and then, in the child process, to call exec(), which releases most of these resources and replaces the process image with the new command.
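To make the fork-then-exec sequence concrete, here is a minimal sketch in OCaml of what a Unix command launcher does under the hood. This is an illustration, not the actual implementation of Sys.command; the function name run_command is hypothetical, and error handling is reduced to the bare minimum.

```ocaml
(* Illustrative sketch: launch a shell command via fork() + exec(),
   as Unix process creation works. The fork duplicates the parent's
   address space; the exec in the child then replaces it. *)
let run_command cmd =
  match Unix.fork () with
  | 0 ->
      (* Child process: replace the duplicated image with /bin/sh
         running the command. execv only returns on failure. *)
      (try Unix.execv "/bin/sh" [| "/bin/sh"; "-c"; cmd |]
       with _ -> exit 127)
  | pid ->
      (* Parent process: wait for the child and report its exit code. *)
      (match Unix.waitpid [] pid with
       | _, Unix.WEXITED code -> code
       | _ -> -1)

let () =
  Printf.printf "exit status: %d\n" (run_command "true")
```

The point relevant to the report is the fork() step: at that moment the system must be able to account for a second copy of the parent's virtual memory, even though the child will discard it an instant later in exec().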

So in order to launch a new process, you need enough free virtual memory to duplicate the current process, which could explain what you are observing.