On Fri, 2010-04-02 at 20:17 +0000, Andre Robatino wrote:
...
> ls -lh dummy-file
> instead shows a size of 700M. Should the test use -l instead of -s?
"ls -s" queries the total number of blocks used by a file, which includes
overhead (e.g. blocks used to record which blocks store the actual file
data). This is why your file that contains exactly 700 * 1024 * 1024
bytes of data (2048 * 358400 bytes = 1433600 512-byte blocks) actually
uses 1433608 blocks in the (ext3) filesystem.
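The size-versus-blocks difference is easy to demonstrate with stat's format
strings. A minimal sketch (GNU coreutils assumed; the file name dummy-small
and the 1 MiB size are chosen here just so it runs quickly, they are not
from the original test):

```shell
# Write exactly 1 MiB (2048 * 512 bytes), then compare the exact byte
# count (stat %s) with the allocated 512-byte block count (stat %b).
# The block total can exceed size/512 because of filesystem overhead
# such as indirect blocks.
dd if=/dev/zero of=dummy-small bs=2048 count=512 2>/dev/null
stat -c 'size=%s blocks=%b' dummy-small
```

On most filesystems %b will be at least size/512, and often a little more.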
Try "stat dummy-file" to see a little more data:
File: `dummy-file'
Size: 734003200 Blocks: 1433608 IO Block: 4096 regular file
Device: fd00h/64768d Inode: 97819 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 501/ ryniker) Gid: ( 501/ ryniker)
Access: 2010-04-06 14:45:02.284645705 -0400
Modify: 2010-04-06 14:45:04.939645516 -0400
Change: 2010-04-06 14:45:04.939645516 -0400
Generally, an 80-minute, 700 MB CD actually holds 737,280,000 bytes, a
little more than 703 MB. However, there is overhead for the CD
filesystem (likely different from the ext3 data displayed above by the
stat command). This overhead is included as part of the data in a disc
image (.iso) file. One might therefore expect any disc_image.iso file no
larger than 737280000 bytes to fit, but other format requirements reduce
the effective capacity of a 737280000-byte CD to 736970752 bytes (as
reported by the wodim command). Still, that is about 702.83 MB.
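For reference, the raw capacity figure falls out of the CD geometry: an
80-minute disc holds 80 minutes * 60 seconds * 75 sectors per second, with
2048 data bytes per mode-1 sector. Shell arithmetic confirms the numbers
quoted above:

```shell
# Raw data capacity of an 80-minute CD: sectors * 2048 bytes/sector.
echo $(( 80 * 60 * 75 * 2048 ))   # 737280000
# Usable capacity reported by wodim, expressed as 2048-byte sectors.
echo $(( 736970752 / 2048 ))      # 359849
```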
Therefore, the "test" should be pretty accurate if it compares the length
reported by "ls -l disc_image.iso" with 736970752. Larger than that, the
disc image will not fit on a "700 MB" CD without extraordinary measures,
such as the wodim -overburn parameter, which probably invites device and
media problems.
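The comparison above can be sketched as a few lines of shell. This uses
stat -c %s rather than parsing the ls -l output, which is a bit more
robust; the file name disc_image.iso is the one used in this thread, and
the limit is the wodim figure quoted above:

```shell
# Does the image fit on a 700 MB (80-minute) CD without overburning?
LIMIT=736970752
size=$(stat -c %s disc_image.iso)
if [ "$size" -le "$LIMIT" ]; then
    echo "disc_image.iso fits on a 700 MB CD ($size bytes)"
else
    echo "disc_image.iso is too large ($size > $LIMIT bytes)"
fi
```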