Validates the output specification for the job when it is submitted.
Typically this checks that the output directory does not already exist,
throwing an exception when it does, so that output is not
overwritten.
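The check described above can be sketched as follows. This is an illustrative sketch only, not the actual Hadoop implementation: `java.nio.file` stands in for the HDFS `FileSystem` API, and the class and method names are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: fail fast if the output directory already exists,
// so that a previous job's output is never overwritten.
class OutputSpecCheck {
    static void checkOutputSpecs(Path outputDir) throws IOException {
        if (Files.exists(outputDir)) {
            throw new IOException(
                "Output directory " + outputDir + " already exists");
        }
    }
}
```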

Some applications need to create/write-to side-files, which differ from
the actual job-outputs.

In such cases there could be issues with two instances of the same TIP
(running simultaneously, e.g. speculative tasks) trying to open or write to
the same file (path) on HDFS. Hence the application-writer will have to pick
unique names per task-attempt (e.g. using the attempt-id, say
attempt_200709221812_0001_m_000000_0), not just per TIP.

To get around this the Map-Reduce framework helps the application-writer
out by maintaining a special
${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
sub-directory for each task-attempt on HDFS where the output of the
task-attempt goes. On successful completion of the task-attempt the files
in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
framework discards the sub-directory of unsuccessful task-attempts. This
is completely transparent to the application.
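The promote-on-success / discard-on-failure behaviour described above can be sketched as below. This is a simplified illustration, not the framework's code: local `java.nio.file` paths stand in for HDFS, and the class and method names are hypothetical.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of what the framework does transparently for each
// task-attempt's _temporary sub-directory.
class AttemptPromotion {
    // On successful completion: promote every file from the attempt's
    // sub-directory into the job's output directory, then remove it.
    static void promote(Path attemptDir, Path outputDir) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(attemptDir)) {
            for (Path f : files) {
                Files.move(f, outputDir.resolve(f.getFileName()));
            }
        }
        Files.delete(attemptDir);
    }

    // On failure: discard the whole sub-directory and its contents.
    static void discard(Path attemptDir) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(attemptDir)) {
            for (Path f : files) {
                Files.delete(f);
            }
        }
        Files.delete(attemptDir);
    }
}
```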

The application-writer can take advantage of this by creating any
side-files required in ${mapreduce.task.output.dir} during execution
of a task, i.e. via getWorkOutputPath(JobConf), and the
framework will move them out similarly - thus there is no need to pick
unique paths per task-attempt.

Note: the value of ${mapreduce.task.output.dir} during
execution of a particular task-attempt is actually
${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}, and this value is
set by the map-reduce framework. So, just create any side-files in the
path returned by getWorkOutputPath(JobConf) from map/reduce
task to take advantage of this feature.
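How the per-attempt value of ${mapreduce.task.output.dir} is composed can be shown with a minimal sketch; the class and method names here are illustrative, and the attempt-id used in the example is the one quoted earlier in this document.

```java
// Illustrative sketch of the per-attempt work directory layout:
// ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
class TaskOutputDir {
    static String workDir(String outputDir, String taskAttemptId) {
        return outputDir + "/_temporary/_" + taskAttemptId;
    }
}
```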

The entire discussion holds true for maps of jobs with
reducer=NONE (i.e. 0 reduces) since output of the map, in that case,
goes directly to HDFS.

Returns:

the Path to the task's temporary output directory
for the map-reduce job.

getUniqueName

The generated name can be used to create custom files from within the
different tasks for the job; the names for different tasks will not collide
with each other.

The given name is suffixed with the task type ('m' for maps, 'r' for
reduces) and the task partition number. For example, given the name 'test'
running on the first map of the job, the generated name will be
'test-m-00000'.
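The naming scheme above can be sketched as follows. This is an illustrative reimplementation of the format, not Hadoop's own code; the class and method signature are hypothetical.

```java
// Illustrative sketch of the unique-name format:
// "<name>-<type>-<5-digit partition>", where type is 'm' for map
// tasks and 'r' for reduce tasks.
class UniqueName {
    static String getUniqueName(String name, boolean isMap, int partition) {
        return String.format("%s-%s-%05d", name, isMap ? "m" : "r", partition);
    }
}
```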