Posted by CmdrTaco on Monday May 17, 2010 @11:44AM
from the remember-when-20meg-was-infinity dept.

Stoobalou writes "After a few weeks of rumours, Seagate's senior product manager Barbara Craig has confirmed that the company is announcing a 3TB drive later this year, but the move to 3TB of storage space apparently involves a lot more work than simply upping the areal density. The ancient foundations of the PC's three-decade legacy have once again reared their DOS-era head, revealing that many of today's PCs are simply incapable of coping with hard drives that have a larger capacity than 2.1TB."

I ran into that a few years ago when I added a 4TB hardware RAID5 to my Linux server. The partition table that is made by fdisk can't handle it. I was forced to use parted to make an EFI partition table instead. It was a little different but completely doable. Took me about 2 minutes on Google to find a howto.
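For anyone who hits the same wall, the parted incantation is short. A minimal sketch, practicing on a sparse file so no real disk gets touched; on the actual array you'd substitute your device node (the file name and sizes here are made up):

```shell
# Stand in for a >2TB array with a sparse 4TB file.
truncate -s 4T /tmp/fake-raid.img
# Replace the MBR-style label with a GPT (EFI) partition table...
parted --script /tmp/fake-raid.img mklabel gpt
# ...then one big partition spanning the whole device.
parted --script /tmp/fake-raid.img mkpart primary ext4 0% 100%
parted --script /tmp/fake-raid.img print
rm /tmp/fake-raid.img
```

Same two-minute howto, in other words: `mklabel gpt` is the whole trick; fdisk of that era simply had no equivalent.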

One other question about this announcement: why did they bother with 3TB? Shouldn't the next step be 4TB? We are counting in binary, are we not?

No, we are not. We may count in binary for memory, but it's different for physical hard drives with spinning disks. For these, we count in platters (the actual physical disk(s) spinning in the drive).

Hard drives typically have somewhere between one and four platters. Drives with more platters exist, but they're less common.
Common platter sizes: 500GB, 375GB, 333GB, 250GB

I didn't RTFA (this is Slashdot, come on), but I'm guessing what Seagate really did was come out with a 750GB platter, which can be used to produce a 3TB drive from four of those platters. You'll probably see the 4TB drive you want when they come out with a 1TB platter.

You need to be able to use block addresses larger than 32 bits. This is possible since the ATA LBA48 spec uses 48-bit addresses, but the internals of some OSes use only 32-bit block numbers. The solution to this is to use a 64-bit OS, which allows 48-bit addresses (and then some).

The other problem is that the MBR disk partitioning scheme uses 32-bit block addresses, so you can't partition a disk larger than 2TB. But the answer to this is to use GUID (GPT) disk partitioning.
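The arithmetic behind both ceilings, assuming the traditional 512-byte sector:

```shell
# 2^32 sector numbers * 512-byte sectors = the famous 2TB wall.
mbr_max=$(( (2 ** 32) * 512 ))
echo "MBR/32-bit limit: $mbr_max bytes ($(( mbr_max / 1024 ** 4 )) TiB)"

# 48-bit LBA pushes the ceiling out to 128 PiB.
lba48_max=$(( (2 ** 48) * 512 ))
echo "48-bit LBA limit: $lba48_max bytes ($(( lba48_max / 1024 ** 5 )) PiB)"
```

So a 3TB drive sails past what a 32-bit sector number can address, while LBA48 has headroom for decades.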

Finally, there's your BIOS, which probably only supports MBR and 32-bit LBA. GUID partitioning supports making your disk look like an MBR disk so you can boot off it. You'll have to boot off a partition that starts within the first 2TB of the disk, but other than that you should be okay. Just make sure never to use any tools that think your disk is an MBR disk when you are repartitioning it or otherwise accessing it directly.

By "long ago", you mean in their very latest OS release in late 2009? Despite their opportunity several years ago to make a clean break from 32-bit when they switched to Intel, they didn't do that. They supported your legacy hardware and software for years. Linux and BSD should still work fine on a G4, even though it's about eight years old.

You mean GUID Partition Table [wikipedia.org]. It is part of the EFI standard; however, you can use a GPT disk with a BIOS. Linux and FreeBSD are the only two OSes that can boot from a BIOS/GPT setup.

Windows can boot from a GUID partition table, but only on an EFI motherboard and those aren't exactly falling from the sky right now.

Personally all of my disks are on GUID because my main OSes are Linux/MacOS X, other than my OpenSolaris server which just uses ZFS on the devices.

I'm not familiar with LVM. Mind giving a quick "noob's guide on what you should know"?

It's a "pulling oneself up by one's bootstraps" thing. He was trying to make a "funny". So, on my AMD64 from like five years ago, running Linux 2.6 from like five years ago, I can theoretically provision an 8 exabyte LVM logical volume (LV), which would seem to mean a 4TB LV is small potatoes. But LVs live inside PVs (simplification). All you gotta do is use fdisk to create a honking big 4TB physical volume partition with type 8E. Oh wait, the problem was that fdisk doesn't work. A real knee-slapper. But not as funny as trying to use NTFS or Windows on it.

Actually the real answer is you physically reconfigure your disks (how?) to make two little 2TB PVs, then combine them into a VG (volume group), then put the 4TB LV inside the VG. Then next year, when your collection of pr0n is like 3.9TB, you buy a new 10TB drive, add it to the VG, and remove the old drives from the VG. How you expand the LV to use the new space is your problem, but it probably involves our old pal ext2resize or whatever's appropriate.
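Those steps map onto the LVM tools roughly like this. The device names, VG/LV names, and sizes are all made up for illustration; check lvm(8) and friends before running anything like it for real:

```shell
# Two ~2TB partitions (type 8E) become physical volumes...
pvcreate /dev/sdb1 /dev/sdc1
# ...pooled into a single volume group...
vgcreate bigvg /dev/sdb1 /dev/sdc1
# ...holding one ~4TB logical volume.
lvcreate -L 3.9T -n media bigvg

# Next year: add the new 10TB drive, migrate, retire the old pair.
vgextend bigvg /dev/sdd1
pvmove /dev/sdb1 && pvmove /dev/sdc1   # shuffle extents off the old PVs
vgreduce bigvg /dev/sdb1 /dev/sdc1

# Grow the LV, then grow the filesystem into it.
lvextend -L +6T bigvg/media
resize2fs /dev/bigvg/media
```

The punchline is that the filesystem never notices the disks underneath it changing; it just sees `/dev/bigvg/media` getting bigger.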

Think of LVM as a layer of abstraction between physical devices (your disks and/or arrays) and the logical devices that you can then put a filesystem on. LVM lets you break your physical disks into bite-sized pieces called extents (say, 32MB each, but it's configurable), and then you can add/remove extents to a logical disk device. If you have a filesystem that supports it, you can then grow/shrink the filesystem to use the space you've allocated.

And if their sectors don't fall on a physical boundary, then you've just used 8KB on the physical drive.

100% false.

You still only use 4K on the physical drive. You just have to read & write 8K at a time because the misaligned 4K filesystem block straddles two physical blocks. But since filesystem blocks are packed sequentially there is no wasted space, they are just all misaligned by the same offset.
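A quick sanity check of that claim, with 4K physical sectors and filesystem blocks all shifted by 512 bytes: every block straddles two physical sectors, but consecutive blocks share the sectors at their seams, so no space is lost.

```shell
phys=4096   # physical sector size
off=512     # misalignment of the first filesystem block
for blk in 0 1 2 3; do
  start=$(( off + blk * phys ))
  end=$(( start + phys - 1 ))
  echo "fs block $blk: bytes $start-$end, phys sectors $(( start / phys ))-$(( end / phys ))"
done
```

Each block spans sectors N through N+1, and the next block starts in sector N+1; the cost is doubled I/O on every access, not wasted capacity.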

Not a bad noob reference. It's funny in a quaint way how it squeals about root-on-LVM... that hasn't been a serious problem since like the Clinton administration. I know I've got vanilla out-of-the-box Debian boxes from like half a decade ago with root on LVM; just not an issue.

It does suck in the "Why" section, in that all it does is list "What" it can do.

Assuming the prerequisite of understanding what Xen/KVM/VMware does, think about how it wedges a layer in between the OS and the CPU so you can pool, combine, snapshot, transfer, share, shrink, and expand what the OS sees as "its" CPU. LVM does the same thing, except it lives between the partition table and the physical hardware. The analogy: LVM is to disks what KVM/Xen/VMware is to CPUs. I kind of enjoy this paragraph; I think it's the best description of LVM I've ever read, and not just because I wrote it...

15 years ago when you were paying $500 for a 320MB hard drive, did you ever anticipate your home PC would someday have a capacity of multiple terabytes? Could you imagine that a laptop would ever be able to hold over a terabyte?

Yes, yes I did. I notice your very high six-digit UID. Now, when my father's employer paid something like $20K for a 5-meg DASD the size of a filing cabinet when I was a little kid, I never imagined I could buy my own personal "winchester disc" for less than "a buck a meg", but I finally did that on sale in the 1990-ish timeframe. At that point you kind of get the idea that increasing capacity is a way of life. And it's been that way for decades.

Not only can you get the latest Safari 3 (3.2.3), which is only a year old, to run on Tiger; you can get the latest and greatest Safari 4.0.5 on Tiger too.
Links:
Safari 4.0.5 [apple.com], Safari 3.2.3 for Tiger [apple.com]

The BIOS doesn't know anything about partitioning. It only knows it needs to read sector 0 of a disk into memory at 0x7c00 and jump to it. If the disk is MBR-partitioned, that's the MBR, and the code there knows how to scan the partition table and load the boot sector of the active partition using INT 13h. If the disk isn't partitioned, sector 0 is already the filesystem's boot sector, and it will use INT 13h to load the OS boot loader.
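The BIOS's sanity check is about as minimal as it gets: it just wants the 0x55AA boot signature in the last two bytes of sector 0. A throwaway demo on a one-sector image file (the offsets come from the standard MBR layout; nothing here touches a real disk):

```shell
img=$(mktemp)
# One empty 512-byte "sector 0".
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Stamp the boot signature at offset 510.
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
# Read the two signature bytes back the way a tool (or the BIOS) would.
od -A d -t x1 -j 510 -N 2 "$img"
rm "$img"
```

The 64 bytes just before the signature (offsets 446-509) are the four 16-byte partition entries the MBR code scans.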

GPT is different because sector 0 contains a "protective MBR" that just reserves the whole disk. That doesn't contain any code; EFI firmware needs to read boot code from a special FAT-formatted partition (Macs apparently use HFS+ instead [ntlworld.com]). EFI offers a much more complicated API than the legacy BIOS, which is good in some ways (flexibility) and bad in others (more chance of bugs).

But non-partitioned disks have always been supported. In fact, floppy disks are always non-partitioned.

Actually, it's a shame that sector 0 of a GPT disk doesn't contain code to load a boot manager that understands GPT, to allow booting from a GPT disk with an old-fashioned BIOS. Or that no way was ever developed for old-style BIOSes to boot from disks whose partition tables hold 64-bit LBAs; MBR partitioning only has space for 32-bit LBAs, which means no support for disks bigger than 2TB.

Snow Leopard is not a pure 64-bit OS. By default it runs a 32-bit kernel on all hardware except the Xserve, and it can be booted with a 64-bit kernel on some more recent hardware, but not everything works in that configuration.

Because 512-byte sectors waste less space on slack than anything larger.

Um, no. The sector size dictates what boundaries the OS has to do reads and writes on. It doesn't dictate how the OS uses the space. 4k sectors means that to read or write an aligned 4k filesystem block, the OS has to do one I/O operation instead of eight; and that if it wants to write a 512-byte block, it has to do a read-modify-write cycle.

How efficiently small files get stored is a property of the filesystem, which doesn't even know about the sector size. Common filesystems all use 4k blocks or bigger anyway. Some filesystems store files smaller than 4k efficiently by packing them in with the metadata or dedicating some blocks to store several files per block. Filesystems that do this include, most notably, NTFS, and also some Linux filesystems like ReiserFS and btrfs. Wikipedia calls this block suballocation [wikipedia.org] (don't know if this term is standard). This is totally orthogonal to the sector size.