with awk i have been processing large text files where the first field
was a human-readable date that had to be converted to unix seconds
(epoch time). for this purpose i was using an awk pipe call

{ d = "date -u -d \"" $1 "\" +%s"; d | getline t; close(d); print t, ... }

where $1 was holding the date string (quoted, in case it contains
spaces). unfortunately the files contained on the order of a million
such lines and awk was able to process only ~2000 lines per second. the
slowness was mostly due to spawning a new "date" process for each line.

then it occurred to me that i could use "date" as a co-process in awk
directly, spawning "date" only once and using date's -f option to read
the dates from its standard input: