I am a novice. What is a parallelizing compiler? Is the term limited
to a compiler that takes a sequential program as input and produces
code that exploits parallelism, either for a multiprocessor or for a
single processor with multiple functional units? That is, would an
HPF or OpenMP compiler not be called a parallelizing compiler,
because the parallelism is provided explicitly in the program through
language constructs like forall or through compiler directives,
rather than discovered automatically by the compiler itself?

For an optimizing compiler that performs data-dependence analysis,
what is the usual granularity of the analysis? A source-program
statement (which is what I usually see in textbooks) or an
intermediate-code instruction?

I would divide optimizing compilers into two categories: parallelizing
compilers, which exploit parallelism, and locality-optimizing
compilers, which improve memory/cache locality, for example by
reordering accesses so that consecutive instructions touch the same
cache line. The latter do not necessarily parallelize anything. Is
this a fair classification?