During archive creation, a large data block in the sparse map overflows the size_t numbytes member of struct sp_array, and a corrupted file is stored in the tar archive (a zero block instead of the first 4 GB of the data block).
Instead of changing size_t to off_t for numbytes and the other variables, I think it is better to split large data blocks into pieces smaller than 4 GB. Admittedly, I did not look closely into changing size_t to off_t, since I'm not very familiar with the code; you may prefer that approach.

IMHO it's important to use the literal 0xffffffff rather than any "portable" define, to ensure that an archive created on a 64-bit machine (where size_t is 8 bytes) never contains a sparse-map data block larger than 4 GB, and can therefore be unpacked by tar on a 32-bit machine.

tar_sparse_scan() probably also needs to be called inside the conditional; I can't verify this because it's not implemented/documented and I don't know what it is supposed to do.