Prior to
this release, Cassandra assigned one token per node, and each node owned exactly
one contiguous range within the cluster. Virtual nodes (vnodes) change this
paradigm from one token and range per node to many tokens per node. This allows
each node to own a large number of small ranges distributed throughout the ring,
which has a number of
important advantages.
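For example, vnodes are enabled through the `num_tokens` setting in cassandra.yaml. A minimal sketch (256 is a commonly cited value, not a requirement; tune for your hardware):

```yaml
# cassandra.yaml -- enabling virtual nodes.
# With num_tokens set, the node is assigned that many tokens
# (and therefore that many small ranges) instead of one.
num_tokens: 256

# Leave initial_token unset when using vnodes; tokens are
# chosen automatically at bootstrap.
# initial_token:
```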

The release provides
faster startup times for each node in a cluster; internal tests
performed at DataStax showed up to 80% less time needed to load primary
indexes. The reduction comes from more efficient sampling
and loading of indexes into memory caches, which improves index load time
dramatically by eliminating the need to scan the full partition index.
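Index sampling is governed by the `index_interval` setting in cassandra.yaml. The value below is the 1.2-era default and is shown only as an illustration:

```yaml
# cassandra.yaml -- partition index sampling.
# A larger index_interval keeps fewer samples in memory and
# speeds up index loading, at the cost of slower lookups;
# a smaller value does the reverse.
index_interval: 128
```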

In previous versions, a single unavailable disk had the
potential to make the whole node unresponsive (while still technically alive and
part of the cluster). Memtables were not flushed and the node eventually ran out
of memory. If the disk contained the commitlog, data could no longer be appended
to the commitlog. Thus, the recommended configuration was to deploy Cassandra on
top of RAID 10, but mirroring left only half of the raw disk capacity usable.
The new disk management solves these problems and eliminates the need for RAID,
as described in the hardware recommendations.
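The new behavior is controlled by the `disk_failure_policy` setting in cassandra.yaml. A sketch of the available policies:

```yaml
# cassandra.yaml -- how the node reacts when a data disk fails.
# stop:        shut down gossip and client transports, leaving
#              the node effectively down but inspectable via JMX
# best_effort: stop using the failed disk and keep serving
#              requests from the remaining disks
# ignore:      pre-1.2 behavior (requests to the failed disk
#              simply fail)
disk_failure_policy: stop
```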

Cassandra 1.2 evicts tombstones automatically and more often, and makes them
easier to manage. Configuring tombstone eviction instead of manually triggering
compaction can save users time, effort, and disk space.
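Tombstone eviction can be tuned per table through compaction subproperties. A sketch in CQL, using a hypothetical `users` table (the values shown are the documented defaults: evict when an SSTable is at least 20% tombstones, rechecking no more than once a day):

```cql
-- Hypothetical table; only the subproperty names and defaults
-- come from the Cassandra 1.2 documentation.
ALTER TABLE users
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '86400'
  };
```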

Support for concurrent schema changes

Cassandra 1.1 introduced concurrent modification of schema objects across a
cluster, but did not support programmatically creating and dropping tables,
permanent or temporary, in a concurrent fashion. Version 1.2 adds this support,
so multiple users can add and drop tables, including temporary tables,
concurrently.
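As an illustration, two clients connected to different nodes could now safely issue schema statements like these at the same time (both table names are hypothetical):

```cql
-- client A, connected to node 1
CREATE TABLE users_by_email (
  email text PRIMARY KEY,
  user_id uuid
);

-- client B, connected to node 2, concurrently
DROP TABLE old_sessions;
```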