@CodeGnome I actually didn't understand the original answer until I Googled "parentheses in bash" on a hunch, and discovered it was the parens specifically that made it a subshell. (Before that, I'd thought it was the ampersand that made it a subshell.)
– jessepinho Apr 6 '16 at 8:12


@mklement0 While your suggestion is technically correct, it doesn't answer the question. The construct { command & } 2>/dev/null hides the initial job-control message, but there is no way to suppress the '[1]- Done' message once the sleep completes.
– Dave Dec 20 '16 at 1:14


@Dave: Thanks for making me dig deeper. Let me summarize differently: (sleep 10 &) conveniently silences both creation and termination messages, but you lose control of the background job. To avoid that, use { sleep 10 & } 2>/dev/null to silence the creation message, and use either wait or kill analogously to silence the termination message (which may not always be an option). Alternatively, set +m can be used to categorically silence termination messages, which, however, has many potentially unwanted side effects. My answer (hopefully) has the full story.
– mklement0 Dec 20 '16 at 4:15

By using the control operator & inside a subshell, you lose control of the background job: jobs won't list it, and neither %% (the job spec of the most recently launched job) nor $! (the PID of the last process launched as part of the most recent job) will reflect it.[1]

For launch-and-forget scenarios, this is not a problem:

You just fire off the background job,

and you let it finish on its own (and you trust that it runs correctly).

[1] Conceivably, you could go looking for the process yourself, by searching running processes for ones matching its command line, but that is cumbersome and not easy to make robust.
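For illustration, a sketch of that cumbersome fallback, assuming the procps pgrep utility is available. The bracket trick in the pattern keeps pgrep from matching its own invocation, but the approach stays brittle: any other process whose command line happens to contain "sleep 300" will also match.

```shell
# Launched inside a subshell: the job is not in our job table, and $! is stale.
(sleep 300 &)

# Fall back to searching running processes by command line.
# 'sleep 30[0]' matches "sleep 300" without the pattern matching itself.
pid=$(pgrep -f 'sleep 30[0]' | head -n 1)

if [ -n "$pid" ]; then
  kill "$pid"   # best-effort cleanup; we could be killing an unrelated match
fi
```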

Launch-and-control-later:

If you want to remain in control of the job, so that you can later:

kill it, if need be, or

synchronously wait (at some later point) for its completion,

a different approach is needed:

Silencing the creation job-control messages is handled below, but in order to silence the termination job-control messages categorically, you must turn the job-control shell option OFF:

set +m (set -m turns it back on)

Caveat: This is a global setting that has a number of important side effects, notably:

Stdin for background commands is then /dev/null rather than the current shell's.

The keyboard shortcuts for suspending (Ctrl-Z) and delay-suspending (Ctrl-Y) a foreground command are disabled.

For the full story, see man bash and (case-insensitively) search for occurrences of "job control".

To silence the creation job-control messages, enclose the background command in a group command and redirect the latter's stderr output to /dev/null:

{ sleep 5 & } 2>/dev/null

The following example shows how to quietly launch a background job while retaining control of the job in principle.

$ set +m; { sleep 5 & } 2>/dev/null # turn job-control option off and launch quietly
$ jobs # shows the job just launched; it will complete quietly due to set +m

If you do not want to turn off the job-control option (set +m), the only way to silence the termination job-control message is to either kill the job or wait for it:

Caveat: There are two edge cases where this technique still produces output:

If the background command tries to read from stdin right away.

If the background command terminates right away.

To launch the job quietly (as above, but without set +m):

$ { sleep 5 & } 2>/dev/null

To wait for it quietly:

$ wait %% 2>/dev/null # use of %% is optional here

To kill it quietly:

$ { kill %% && wait; } 2>/dev/null

The additional wait is necessary because Bash normally displays the termination job-control message asynchronously (at the time of actual process termination, shortly after the kill). The wait turns it into synchronous output from wait itself, which the stderr redirection can then silence.

But, as stated, if the job completes by itself, a job-control message will still be displayed.
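Putting the pieces together, a minimal end-to-end sketch, using $! rather than %% as the job handle (both refer to the same job here):

```shell
{ sleep 60 & } 2>/dev/null   # quiet launch; job control is retained
pid=$!                       # PID of the background job just launched

# Quiet kill: the extra wait turns Bash's normally asynchronous
# termination message into synchronous output from wait itself,
# which the stderr redirection then suppresses.
{ kill "$pid" && wait "$pid"; } 2>/dev/null
```

Note that wait's exit status reflects how the job ended: a job killed by SIGTERM yields 128+15 = 143.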

"$@" should definitely be quoted. This won't work most of the time - only for executing single simple commands. You may as well just use bash -c 'cmds', or bash -s <<"EOF"
– ormaaj Jun 19 '12 at 10:20


Even better, using () to run in a subshell works: (echo toto&)
– static_rtti Jun 19 '12 at 10:26


You lose job control that way. The forked process becomes a child of init. Running a separate script doesn't have that problem if you're really trying to manage parallel things, but it sounds like that isn't actually important to you.
– ormaaj Jun 19 '12 at 11:08


@static_rtti, "worse" is not quite the right way to describe it. Unquoted $@ behaves the same as unquoted $*, with all the same bugs. You don't want those bugs.
– Charles Duffy May 5 '15 at 21:22

Interactively, no. It will always display job status. You can influence when the status is shown using set -b.

There's nothing preventing you from using the output of your commands (via pipes, storing it in variables, etc.). The job status is sent to the controlling terminal by the shell and doesn't mix with other I/O. If you're doing something complex with jobs, the solution is to write a separate script.
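A small sketch of that point (mktemp is an assumption about available tooling): job-status messages go to the terminal's stderr, while the job's own stdout can be redirected and collected normally:

```shell
# A foreground command's output is captured as usual; job-status
# messages never mix with it, since they go to the terminal.
result=$(echo hello)                            # -> "hello"

# A quietly launched background job can write to a file, which you
# read back after wait-ing for it:
tmp=$(mktemp)
{ printf '%s\n' world >"$tmp" & } 2>/dev/null   # creation message silenced
wait 2>/dev/null                                # termination message silenced
output=$(cat "$tmp")                            # -> "world"
rm -f "$tmp"
```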

The job messages are only really a problem if you have, say, functions in your bashrc that use job control and need direct access to your interactive environment. Unfortunately, there's nothing you can do about it.