When running Profiler, I'd suggest limiting the results to anything taking a lot of CPU at first; if it's a busy box, there could be a lot of noise in there. After that, save the Profiler output to a table on another machine so you can run some queries to see whether the same query is occurring frequently and consuming all that CPU in lots of small chunks. Also, I'd suggest doing number 4 first, in case there is a real "doh" moment in there. Someone zipping a backup to transfer to another machine has caught me out in the past. :)
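Once the trace is in a table, a quick aggregation will surface both the single expensive queries and the death-by-a-thousand-cuts ones. A rough sketch, assuming the trace was saved to a table called `TraceTable` (the table name is hypothetical; adjust to whatever you saved it as):

```sql
-- Assumes the Profiler trace was saved to dbo.TraceTable (hypothetical name).
-- TextData is ntext in trace tables, so cast it before grouping.
SELECT TOP 20
    CAST(TextData AS NVARCHAR(4000)) AS query_text,
    COUNT(*)                         AS executions,
    SUM(CPU)                         AS total_cpu_ms,
    AVG(CPU)                         AS avg_cpu_ms
FROM dbo.TraceTable
WHERE EventClass IN (10, 12)         -- RPC:Completed, SQL:BatchCompleted
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY total_cpu_ms DESC;
```

Sorting by `total_cpu_ms` rather than `avg_cpu_ms` is what catches the cheap-but-frequent queries the comment above is warning about.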
–
Robin Aug 15 '09 at 9:43

Perfmon -> CPU consumption per process, to be sure SQL Server is the offender

Perfmon -> Batches and Compilations per second, to see if you have a few nasty queries or a whole lot of small ones.
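If you'd rather log those counters from the command line than click through the Perfmon GUI, `typeperf` can capture them. The counter paths below are the usual ones; the `sqlservr` process instance and the `SQLServer:` counter prefix assume a default (non-named) instance:

```shell
:: Sample every 5 seconds; Ctrl+C to stop.
:: "sqlservr" and the "SQLServer:" prefix assume a default instance;
:: named instances use "MSSQL$InstanceName:" instead.
typeperf -si 5 ^
  "\Process(sqlservr)\% Processor Time" ^
  "\SQLServer:SQL Statistics\Batch Requests/sec" ^
  "\SQLServer:SQL Statistics\SQL Compilations/sec"
```

A high compilations-to-batches ratio points at plan-cache churn (lots of ad hoc SQL); a huge batch rate with few compilations points at volume instead.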

Generally (not having seen your setup), if you have a lot of CPU action and not a lot of disk activity, it means the data SQL Server is using fits in memory BUT is not indexed effectively. Queries then expend a lot of CPU cycles scanning in-memory data pages, because the index structures that would simplify and accelerate that process are missing. If the data were bigger, or the RAM smaller, this would reveal itself as an I/O bottleneck; with plenty of RAM and a smaller data set, you get a CPU bottleneck instead.
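You can sanity-check the "scanning in memory" theory with `sys.dm_db_index_usage_stats`: lots of `user_scans` relative to `user_seeks` on your bigger tables is the tell-tale sign. A sketch (note the counters reset when the instance restarts, so they reflect activity since the last restart):

```sql
-- High user_scans vs user_seeks suggests queries are scanning
-- rather than seeking. Counters reset on instance restart.
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name                     AS index_name,
    ius.user_seeks,
    ius.user_scans,
    ius.user_lookups
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON  i.object_id = ius.object_id
    AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_scans DESC;
```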

So the next stop is a query to the Missing Index DMV, where you might find a lot of high-cost queries that are begging for better indexes. Take the results with a grain of salt, though, and implement selectively.
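For reference, a typical way to query those DMVs looks like the sketch below. The "improvement measure" weighting is a common community convention for ranking the suggestions, not an official formula, and the suggestions can overlap or be redundant, hence the grain of salt:

```sql
-- Rank missing-index suggestions by a rough cost * impact * usage score.
-- The weighting is a common convention, not an official formula.
SELECT TOP 20
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY improvement_measure DESC;
```

Implement the top few, re-measure CPU, and repeat, rather than creating everything the DMV suggests.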