- switched from mmap-based writing to simpler stdio-based writing. Has a
minor impact (0.5 microseconds) on microbenchmarks for asynchronous
writes. Synchronous writes speed up from 30ms to 10ms on linux/ext4.
Should be much more reliable on diverse platforms.
- compaction errors now immediately put the database into a read-only
mode (until it is re-opened). As a downside, if the disk runs out of
space and space is later freed, a re-open is required to recover,
whereas previously recovery would happen automatically. On the
plus side, many corruption possibilities go away.
- when a synchronous log write succeeds but the subsequent sync fails,
force the DB into an error state so that all future writes fail.
- repair now regenerates sstables that exhibit problems
- fix issue 218 - Use native memory barriers on OSX
- fix issue 212 - QNX build is broken
- fix build on iOS with Xcode 5
- make tests compile and pass on Windows

Also,
* Fix link to bigtable paper in docs.
* New sstables will have the file extension .ldb. .sst files will
continue to be recognized.
* When building for iOS, use xcrun to execute the compiler. This may
affect issue 177.

When finding overlapping files, the covered range may expand
as files are added to the input set. We now correctly expand
the range when this happens instead of continuing to use the
old range. For example, suppose L0 contains files with the
following ranges:

F1: a .. d
F2: c .. g
F3: f .. j

and the initial compaction target is F3. We used to search
for range f..j which yielded {F2,F3}. However we now expand
the range as soon as another file is added. In this case,
when F2 is added, we expand the range to c..j and restart the
search. That picks up file F1 as well.
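
A minimal sketch of the expanding search follows; FileRange and the raw
std::string comparisons are illustrative stand-ins for the real
FileMetaData/comparator machinery:

  #include <string>
  #include <vector>

  // Illustrative stand-in for FileMetaData; the real code compares keys
  // with the user comparator rather than raw std::string ordering.
  struct FileRange { std::string smallest, largest; };

  // Collect every level-0 file overlapping [begin,end].  Whenever a newly
  // added file extends the range, restart the scan with the wider range so
  // files skipped earlier are reconsidered.
  std::vector<FileRange> GetOverlappingInputs(
      const std::vector<FileRange>& level0,
      std::string begin, std::string end) {
    std::vector<FileRange> inputs;
    bool expanded = true;
    while (expanded) {
      expanded = false;
      inputs.clear();
      for (const FileRange& f : level0) {
        if (f.largest < begin || f.smallest > end) continue;  // disjoint
        inputs.push_back(f);
        if (f.smallest < begin) { begin = f.smallest; expanded = true; }
        if (f.largest > end)    { end = f.largest;    expanded = true; }
      }
    }
    return inputs;
  }

With the files above, a search starting at f..j grows to c..j once F2 is
added, and the re-scan with c..j then picks up F1 as well.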

This change fixes a bug related to deleted keys showing up
incorrectly after a compaction as described in Issue 44.

Changed manual compaction code so it breaks up compactions of
big ranges into smaller compactions.

Changed the code that pushes the output of memtable compactions
to higher levels to obey the grandparent constraint: i.e., we
must never have a single file in level L that overlaps too
much data in level L+1 (to avoid very expensive L -> L+1 compactions).
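
Roughly, the placement decision looks like the sketch below; overlap_bytes
and the byte limit are illustrative stand-ins for the real file-metadata walk:

  #include <cstdint>
  #include <functional>

  // overlap_bytes(level) would report how many bytes of files in `level`
  // overlap the new sstable's key range.
  int PickOutputLevel(const std::function<int64_t(int)>& overlap_bytes,
                      int max_mem_compact_level) {
    const int64_t kMaxGrandParentOverlapBytes = 10 * (2 << 20);  // illustrative
    int level = 0;
    while (level < max_mem_compact_level) {
      if (overlap_bytes(level + 1) > 0) break;  // would overlap the next level
      if (overlap_bytes(level + 2) > kMaxGrandParentOverlapBytes) break;  // grandparent constraint
      level++;
    }
    return level;
  }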

Added code to pretty-print internal keys.

- Fixed bug where we would not detect overlap with files in
level-0 because we were incorrectly using binary search
on an array of files with overlapping ranges.
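
For illustration, the fixed check treats level-0 specially (plain strings
stand in for internal keys and the comparator):

  #include <string>
  #include <utility>
  #include <vector>

  // Each pair holds an sstable's (smallest, largest) key.
  using Range = std::pair<std::string, std::string>;

  // Level-0 files may overlap one another, so binary search over the file
  // list is invalid there; scan every file instead.
  bool SomeFileOverlapsRange(const std::vector<Range>& files, bool is_level0,
                             const std::string& begin, const std::string& end) {
    if (is_level0) {
      for (const Range& f : files) {
        if (!(f.second < begin || f.first > end)) return true;
      }
      return false;
    }
    // Levels >= 1 hold disjoint, sorted files, so a binary search is safe
    // there (omitted in this sketch).
    return false;
  }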

Added "leveldb.sstables" property that can be used to dump
all of the sstables and ranges that make up the db state.
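
The property is read through the existing DB::GetProperty call; a minimal
sketch:

  #include <iostream>
  #include <string>
  #include "leveldb/db.h"

  // Dump the current sstable layout of an already-open database.
  void DumpSSTables(leveldb::DB* db) {
    std::string value;
    if (db->GetProperty("leveldb.sstables", &value)) {
      std::cout << value;  // lists each level's files with their key ranges
    }
  }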

- Removing post_write_snapshot support. An email to the leveldb mailing
list brought up no users, just confusion from one person about
what it meant.

Benchmark for evaluating this change:
$ db_bench --benchmarks=fillseq1,readrandom --threads=$n
(fillseq1 is a small hack to make sure fillseq runs once regardless
of the number of threads specified on the command line).

- Added a C binding for LevelDB.
May be useful as a stable ABI that can be used by
programs that keep leveldb in a shared library,
or for a JNI API.
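
A minimal usage sketch of the binding (include/leveldb/c.h); the database
path is just an example and error handling is abbreviated:

  /* Compiles as either C or C++. */
  #include <stdio.h>
  #include <stdlib.h>
  #include "leveldb/c.h"

  int main(void) {
    char* err = NULL;
    leveldb_options_t* options = leveldb_options_create();
    leveldb_options_set_create_if_missing(options, 1);
    leveldb_t* db = leveldb_open(options, "/tmp/c_api_demo", &err);
    if (err != NULL) { fprintf(stderr, "open: %s\n", err); free(err); return 1; }

    leveldb_writeoptions_t* woptions = leveldb_writeoptions_create();
    leveldb_put(db, woptions, "key", 3, "value", 5, &err);
    if (err != NULL) { fprintf(stderr, "put: %s\n", err); free(err); err = NULL; }

    leveldb_readoptions_t* roptions = leveldb_readoptions_create();
    size_t vallen = 0;
    char* val = leveldb_get(db, roptions, "key", 3, &vallen, &err);
    if (val != NULL) { printf("%.*s\n", (int)vallen, val); free(val); }

    leveldb_readoptions_destroy(roptions);
    leveldb_writeoptions_destroy(woptions);
    leveldb_options_destroy(options);
    leveldb_close(db);
    return 0;
  }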

- Replaced SQLite's readseq benchmark with a more efficient version.
SQLite readseq speeds increased by about a factor of two
from the previous version. Also updated the benchmark page to
reflect the readseq speedup.

- Based on suggestions on the sqlite-users mailing list,
we removed the superfluous index on the primary key
for SQLite's benchmarks, and turned write-ahead logging
("WAL") on. This led to performance improvements for SQLite.

- Based on a suggestion by Florian Weimer on the leveldb
mailing list, we disabled hard drive write-caching via
hdparm when testing synchronous writes. This led to
performance losses for LevelDB and Kyoto TreeDB.

- Fixed a mistake in 2.A.->Random where the bar sizes
were switched for Kyoto TreeDB and SQLite.

- Removing Chromium-specific files.
They are now going to live in the Chromium repository.

- Adding a benchmark page comparing LevelDB performance
to SQLite and Kyoto Cabinet's TreeDB, along with
code to generate the benchmarks.
Thanks to Kevin Tseng for compiling the benchmarks,
and Scott Hess and Mikio Hirabayashi for their
help and advice.

Slight tweak to the no-overlap optimization: only push to
level 2 to reduce the amount of wasted space when the same
small key range is being repeatedly overwritten.

Fix for Issue 18: Avoid failure on Windows by avoiding
deletion of lock file until the end of DestroyDB().

Fix for Issue 19: Disregard sequence numbers when checking for
overlap in sstable ranges. Previously, when writing the same key
over and over again, we would generate a sequence of sstables that
were never merged together since their sequence numbers were
disjoint.
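
Illustratively, the overlap test now strips the trailing 8-byte
sequence/type tag and compares only user keys (std::string comparison
stands in for the user comparator):

  #include <string>

  // Internal keys are user_key followed by an 8-byte (sequence,type) tag.
  std::string UserKey(const std::string& internal_key) {
    return internal_key.substr(0, internal_key.size() - 8);
  }

  // Two sstable ranges overlap iff their user-key ranges intersect.
  // Comparing full internal keys would let sstables holding the same user
  // key at different sequence numbers look disjoint and never be merged.
  bool RangesOverlap(const std::string& smallest1, const std::string& largest1,
                     const std::string& smallest2, const std::string& largest2) {
    return !(UserKey(largest1) < UserKey(smallest2) ||
             UserKey(largest2) < UserKey(smallest1));
  }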

Don't ignore map/unmap error checks.

Miscellaneous fixes for small problems Sanjay found while diagnosing
issue/9 and issue/16 (corruption_test failures).
- log::Reader reports the record type when it finds an unexpected type.
- log::Reader no longer reports an error when it encounters an expected
zero record regardless of the setting of the "checksum" flag.
- Added a missing forward declaration.
- Documented a side-effect of larger write buffer sizes
(longer recovery time).

Change atomic_pointer.h to prefer a memory-barrier-based
implementation over a <cstdatomic>-based implementation for
the following reasons:
(1) On an x86 32-bit gcc-4.4 build, <cstdatomic> was corrupting
the AtomicPointer.
(2) On an x86 64-bit gcc build, a <cstdatomic>-based acquire-load
takes ~15ns as opposed to ~1ns for a memory-barrier-based
implementation.
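
For reference, the memory-barrier flavour is roughly the following (the
x86/gcc barrier is shown; port/atomic_pointer.h picks an appropriate
barrier per platform):

  // On x86 with gcc a compiler-only barrier suffices for acquire/release
  // ordering of plain loads and stores; other platforms need a real fence.
  inline void MemoryBarrier() {
    __asm__ __volatile__("" : : : "memory");
  }

  class AtomicPointer {
   private:
    void* rep_;

   public:
    AtomicPointer() {}
    explicit AtomicPointer(void* p) : rep_(p) {}
    void* NoBarrier_Load() const { return rep_; }
    void NoBarrier_Store(void* v) { rep_ = v; }
    void* Acquire_Load() const {
      void* result = rep_;
      MemoryBarrier();  // later reads may not be reordered before this load
      return result;
    }
    void Release_Store(void* v) {
      MemoryBarrier();  // earlier writes may not be reordered after this store
      rep_ = v;
    }
  };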

This revision adds two major changes:
1. build_detect_platform which generates build_config.mk
with platform-dependent flags for the build process
2. /port/atomic_pointer.h with an AtomicPointer implementation
for platforms without <cstdatomic>

Some of this code is loosely based on patches submitted to the
LevelDB mailing list at https://groups.google.com/forum/#!forum/leveldb
Tip of the hat to Dave Smith and Edouard A, who both sent patches.

Both Snappy (http://code.google.com/p/snappy/) and <cstdatomic>
are now detected by the build_detect_platform
script (1.), which is executed during make.

For (2.), instead of broadly importing atomicops_* from Chromium or
the Google performance tools, we chose to just implement AtomicPointer
and the limited atomic load and store operations it needs.
This resulted in much less code and fewer files - everything is
contained in atomic_pointer.h.

Fixed race condition reported by Dave Smith (dizzyd@dizzyd.com)
on the leveldb mailing list. We were not signalling
waiters after a trivial move from level-0. The result was
that in some cases (hard to reproduce), a write would get
stuck forever waiting for the number of level-0 files to drop
below its hard limit.

The new code is simpler: there is just one condition variable
instead of two, and the condition variable is signalled after
every piece of background work finishes. Also, all compaction
work (including for manual compactions) is done in the
background thread, and therefore we can remove the
"compacting_" variable.

Minor changes:
* Reformatted the bodies of the iterator interface routines in IteratorWrapper to
make them a bit easier to read
* Switched the default in the leveldb makefile to optimized mode, rather
than debug mode
* Fixed a build problem in the chromium port