As a general rule, the default load factor (.75) offers a good
tradeoff between time and space costs. Higher values decrease the
space overhead but increase the lookup cost (reflected in most of the
operations of the HashMap class, including get and put).
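To make the tradeoff concrete, here is a minimal sketch (the capacity of 2048 and the expected entry count are made-up numbers for illustration) showing how you can pass an initial capacity and load factor to the `HashMap` constructor to trade space for fewer resizes:

```java
import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // Hypothetical sizing: if we expect roughly 1000 entries, an
        // initial capacity of 2048 with the default load factor (0.75)
        // keeps the table sparse and avoids intermediate resizes,
        // trading extra space for cheaper lookups.
        Map<String, Integer> map = new HashMap<>(2048, 0.75f);
        map.put("answer", 42);
        System.out.println(map.get("answer")); // prints 42
    }
}
```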

@PaulTomblin Is load factor = bucket size / number of keys? If that is the case, then collisions should reduce, because increasing the load factor means increasing the numerator, provided the number of keys remains constant.
– Geek Aug 25 '12 at 12:12

It has to do with how a hash table is implemented under the hood: it uses hash codes, and since the algorithm that calculates a hash code is not perfect, you can have some collisions. Increasing the load factor increases the probability of collisions, and consequently reduces lookup performance.
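A quick illustration of an imperfect hash code: `"Aa"` and `"BB"` are a well-known colliding pair under Java's `String.hashCode()`, so as keys they would land in the same bucket:

```java
public class CollisionDemo {
    public static void main(String[] args) {
        // Both strings hash to 2112 under String.hashCode(),
        // so a HashMap would place them in the same bucket.
        System.out.println("Aa".hashCode()); // prints 2112
        System.out.println("BB".hashCode()); // prints 2112
    }
}
```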

capacity: the number of buckets in the hash table at any given point in time.

load factor: a measure of how full the hash table is allowed to get before its capacity is automatically increased.

So the higher the load factor, the more occupied a hash table can get before its capacity is increased.

Now, given the best possible implementation of hashCode(), only one value will go in each bucket, and the lookup cost will be minimal.

In the worst case, all values will go in the same bucket and the lookup cost will be maximal.

The average case also depends on the hashCode() implementation, but one more factor at play here is the load factor: the more occupied the collection is, the higher the chance of collision, so a higher load factor will increase the lookup cost in a non-ideal scenario.
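The worst case above can be reproduced with a deliberately bad key type. `BadKey` is a hypothetical class (not from the source) whose constant hashCode() forces every entry into one bucket, so each lookup degenerates into a scan of colliding keys (or a tree walk since Java 8):

```java
import java.util.HashMap;
import java.util.Map;

public class WorstCaseDemo {
    // Hypothetical key with a constant hash code: every instance
    // lands in the same bucket, defeating the hash table.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 1; }
        @Override public boolean equals(Object o) {
            return (o instanceof BadKey) && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i);
        }
        // Still correct, but each get() must compare the probe key
        // against the other keys sharing the bucket.
        System.out.println(map.get(new BadKey(500))); // prints 500
    }
}
```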