First time poster and a very new Unix user, so I'll just pre-apologize for stupid questions now.

Does anybody know of a good RAID 1 hard drive backup solution that is Unix-friendly? I want to avoid any hardcore programming. Can you recommend both NAS and non-NAS options? I need to do nightly backups from a Unix data server running Samba/SWAT that currently has ~300 of 420 GB used, split between public and user folders. This is for an office and involves sensitive data, so I need a safe and secure option.

This is what I was able to find online that seems to fit what I'm looking for:
Buffalo Technology TeraStation Duo TS-WX2.0TL/R1, 2x1 TB, $368.98

Synology DiskStation DS211, 2x1 TB, $550.99

Netgear ReadyNAS Duo 2-Bay RND2210, 2x1 TB, $393.60

Data Dock II DDQ-2000, 2x1 TB, $269.95
Do any of the above make sense? From what I can tell, only the Netgear is Unix-friendly out of the box; the tech guys at Fantom couldn't tell me whether the Data Dock II was or not. Can you recommend any of these or other models? I don't really think I need the NAS option, and it seems you pay considerably more for that. Should I be looking at an entirely different type of data storage? (Cloud storage is not an option.)

In the meantime, while I figure this out, my boss wants me to back up the data ASAP. I was thinking about getting a consumer-grade 500 GB or 1 TB external drive with an Ethernet port and simply backing up the data manually via Windows. I was thinking this would provide a good stopgap and, once the RAID 1 is set up, could simply be backed up manually on a weekly basis, providing essentially an additional disk to the RAID 1 array.

For this I was looking between these two:
Iomega Home Media 34337, 1 TB, $99.99

Buffalo LS-CH1.0TL, 1 TB, $99.99
Any help is greatly appreciated. Thank you.

Define "UNIX-friendly". Which UNIX? Furthermore, what's your architecture and system? What kind of disks do you want to use?

A USB or Ethernet drive would make a good stopgap, as any backup is better than no backup. However, Windows has no respect for UNIX permissions, so blindly copying files could result in much hair-pulling later. You could use the udpcast utility and do something like this:
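A sketch of what that could look like; /path/to/files/i/want/to/backup is a placeholder to substitute, and the runnable part below uses a throwaway directory so nothing real is touched:

```shell
#!/bin/sh
# On the Linux server, bundle the files with tar and stream the
# archive straight into udp-sender instead of writing it locally:
#
#   tar -cpf - /path/to/files/i/want/to/backup | udp-sender
#
# On the receiving end (Windows or another UNIX box):
#
#   udp-receiver --file backup.tar
#
# Demonstrated here against a throwaway directory:
src=$(mktemp -d)
echo "sample" > "$src/report.txt"
chmod 640 "$src/report.txt"

# -c create, -p preserve permissions/ownership, -f - archive to stdout
tar -cpf - -C "$src" . > /tmp/demo-backup.tar
tar -tf /tmp/demo-backup.tar
```

The point of the pipe is that the archive never has to fit on the server's own disk; it goes out over the network as it is built.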

To Corona688: By “UNIX-friendly” I meant that I am ideally looking for something that is compatible with Unix out of the box, in order to minimize additional coding and therefore potential problems and headaches. From what I understand, in some cases the internal cards that coordinate the RAID 1 aren’t necessarily designed to work with a Unix system. I would like to avoid those.

To be honest, I don’t know what type of Unix I am using, or what the architecture and system are. Where would I find that information? (I just started this job and am essentially on my own, technical-wise; nobody even told me the server was on site in a closet for about a week and a half.) I do have root access, if that can help me find this info.

What do you mean by types of disks? Speed, size, manufacturer?

I think I get your point about copying from Unix to Windows. I basically need to format the drive first, and send the data over as one large file that can be restored later if needed, which would preserve individual users’ permissions?

Reliability is key, I will look into 3ware.

Thank you so much for your help and suggestions, I really appreciate it.

Quote:

By “UNIX-friendly” I meant that I am ideally looking for something that is out of the box compatible with Unix in order to minimize additional coding and therefore potential problems and headaches.

Avoid software RAID, then. We use it and it works decently well, but it took a lot of frustration to get going.

A hardware RAID, on the other hand, can present multiple disks to the operating system as a single hard drive. Configuring which drives are part of an array often becomes an extended CMOS setting, completely independent of the installed OS: you might get a 'press ESC to configure drives' message on boot, before the OS actually loads. Assuming your server is a PC, that is.

Quote:

From what I understand, in some cases, the internal cards that coordinate the RAID 1 aren’t necessarily designed to work with a Unix system. I would like to avoid those.

It's not so much that they're not designed to work with UNIX as that the vendors may not have bothered to write UNIX device drivers. This may not be as important as it used to be (for PC hardware, anyway), since most disk controllers are AHCI-compliant these days and work fine with a generic driver.

Quote:

To be honest, I don’t know what type of Unix I am using, or what the architecture and system is, where would I find that information?

That's a pretty important question... "UNIX" is a completely generic term; your wireless router might run some kind of UNIX, and most supercomputers do too. Obviously you can't run software from one on the other, or fit hardware from one into the other. In a shell, try uname and uname -a to find out what your system is.
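For example (the output differs from system to system):

```shell
uname      # kernel name alone, e.g. "Linux" or "SunOS"
uname -a   # kernel, hostname, release, version, and machine architecture
```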

That alone won't tell you what kind of card slots this server has and which, if any, are free, so you might need to take a look at the hardware itself too.

Quote:

I think I get your point about copying from Unix to Windows. I basically need to format the drive first, and send the data over as one large file that can be restored later if needed, which would ensure individual users permissions?

Exactly. The permissions, timestamps, owners, and everything else all get bundled up along with the files when you make a tar with -p. Formatting the drive as NTFS is necessary for Windows if it came formatted FAT, because FAT can't hold files larger than 4 gigabytes.
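A quick way to convince yourself of that, using throwaway temp directories rather than the real data:

```shell
#!/bin/sh
# Round-trip a file through tar with -p and check the mode bits survive.
src=$(mktemp -d); dst=$(mktemp -d)
echo "secret" > "$src/private.txt"
chmod 600 "$src/private.txt"

tar -cpf /tmp/perm-demo.tar -C "$src" .   # create, preserving permissions
tar -xpf /tmp/perm-demo.tar -C "$dst"     # extract, restoring them

stat -c '%a' "$dst/private.txt"   # prints 600
```

Mode bits survive for any user; ownership is only restored when the extraction is done as root.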

Quote:

Reliability is key, I will look into 3ware.

Good deal. We tried to go cheap and tried a jmicron controller, which caused some (fortunately recoverable) data corruption. Never again.

Also: a RAID isn't exactly a backup. It's tempting, since it's automatic and improves your speed too, but it only protects you from a single-disk failure -- that's all. (And single-disk failures will occur more often, because you're running more disks.) Any other kind of problem -- an out-of-control program, a dying disk controller, a murderous power supply, fire, lightning strike, the utility company, theft, a volcano -- is still quite capable of swallowing your data whole. A trustworthy backup is when you make a copy and mail it somewhere else.

The decision was made to purchase a cheaper 1 TB external hard drive as a stop-gap measure to make sure the data is backed up before moving ahead with setting up an automatic backup to a dedicated raid 1 array. We purchased a Buffalo Linkstation Live LS-CHL.

The drive can be formatted to FAT, NTFS, XFS, or HFS+. The server I want to back up is an NTFS file system. I believe the drive comes with XFS standard. Should I format the drive to NTFS for compatibility, or is that a non-issue?

The user manual for the drive states the disadvantages of NTFS as:
1) Read-only from the LinkStation or a Mac.
2) Not suitable for backup from the LinkStation.

The relevant disadvantage of XFS is:
You cannot read data by directly connecting to a PC.

Once that is determined, what are the steps I need to take to make a backup .tar of the server files based upon the code you provided earlier?

Does the Unix command simply go in the command line with root access? I assume I change the /path/to/files/i/want/to/backup to the path relevant to my server? What if I simply want to copy all the files on the drive? Do I designate the name of the backup file before udp-sender, or is it named on the receiving end?

On the Windows end, where is that command entered? The command prompt? Can I name the file anything I like (probably something along the lines of backupmmddyy.tar), or are there restrictions?

Sorry for what I am sure are elementary, if not asinine, questions; I really appreciate the responses.

Quote:

The decision was made to purchase a cheaper 1 TB external hard drive as a stop-gap measure to make sure the data is backed up before moving ahead with setting up an automatic backup to a dedicated raid 1 array. We purchased a Buffalo Linkstation Live LS-CHL.

The drive can be formatted to FAT, NTFS, XFS, and HFS+.

All right so far.

Quote:

The server I want to backup is an NTFS file system.

??? I thought you were running Linux!

Quote:

I believe the drive comes XFS standard. Should I format the drive to NTFS for compatibility? or is that a non-issue.

If you intended to plug the hard drive into anything directly, Windows wouldn't understand XFS. But that should be a non-issue for network storage.

Quote:

The user manual for the drive states the disadvantages of NTFS as:
1) Read-only from the LinkStation or a Mac.
2) Not suitable for backup from the LinkStation.

I'm not sure why it says this.

Quote:

The relevant disadvantage of XFS is:
You cannot read data by directly connecting to a PC.

True for Windows, but XFS isn't completely alien -- we use it here at work on our Linux file server. Linux can read it, if it's configured to do so.

Quote:

Once that is determined, what are the steps I need to take to make a backup .tar of the server files based upon the code you provided earlier?

I don't have a Buffalo Linkstation Live LS-CHL, so I can't tell you exactly how you'd connect to it, but once you do, you open a DOS prompt, change to the drive letter you mapped the NAS to, and run the command. You'll need to put udpcast on the same drive or in your PATH. You can download a Windows version of udp-receiver from the UDP Cast site.

Quote:

Does the Unix command simply go in the command line with root access?

Well, you need to install udpcast first, of course. And you can run it as any user with sufficient access privileges to get at all the files in question.

Quote:

I assume I change the /path/to/files/i/want/to/backup to the path relevant to my server?

Yes.

Quote:

What if I simply want to copy all the files on the drive?

All the files on which drive? Linux doesn't have a C: or D: like Windows; all your partitions are accessed through the same file tree. Folders chosen in /etc/fstab become mount points for partitions -- files inside those folders reside on that partition.
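You can see that layout on the server itself with two commands:

```shell
cat /etc/fstab   # the boot-time table assigning partitions to folders
df -h            # what is actually mounted right now, and how full it is
```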

You can't just do a blind copy of everything while the server's running. There are things that shouldn't be copied while in use, and lots of things it wouldn't make sense to bother copying anyway.

If you really want a true, blind copy of the server -- one you could copy back onto a new drive and boot without it knowing the difference -- you shouldn't make it while the server's operating; you should boot from a live CD and do it with minimal effect on the system itself. But if you don't actually know how to use Linux yet, your options are very limited.

Okay, so here is the setup as best I know it. We have a Unix terminal running Linux (I logged in and used the uname command to check) that is used as a data server. Users in the office, mostly on PCs but some on Macs, can access the drive (through Windows) by mapping a network drive under Tools in My Computer and signing in as a registered user. When one does that, the details section on the left info bar lists the name and the physical address, then "Network Drive", "File System: NTFS", and then the free space and total size. I can also access the server through Samba/SWAT, SecureCRT, and the physical terminal itself.

By doing, do you mean what is it used for? If so, the department uses it to store research data. From what I understand, most users use it to back up data from their computers, but there may be some users who save data primarily or only to the server, for personal or security reasons.

The server has joint shared space, where anybody has read and copy privileges but only the author of a file has edit/delete rights. In addition, each registered user should have a personal space that only they can see.

To be quite honest, I don't know much more beyond that, and I don't think anyone else does at this point. I quite accidentally stumbled onto this problem while looking for a fix for something else (trying to make a public folder on the shared space that gave all users full privileges on any files placed there), and contacted a number of current and former employees to figure out what had been done in the past in terms of backups -- which appears to be nothing.

Quote:

It may be simpler, and faster to plug the drive into the server direct, mount it, and create the tarball on it that way. Assuming your Linux server can understand XFS.

So that would be simply plugging the drive into the server via USB, mounting it (with code), and creating the tarball (more code)? How can I tell whether my Linux server understands XFS? The uname -a command gave more info; would that help? Approximately how long would that take, versus doing it through udpcast as you suggested above?

When I do the backup, do I need to prevent other activity on the server?
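For reference, the direct-attach route might look roughly like the sketch below. The device name /dev/sdb1 and mount point /mnt/backup are assumptions (check dmesg after plugging the drive in for the real device name); only the dated-filename part at the end actually runs as-is:

```shell
#!/bin/sh
# Sketch of the direct-attach backup (needs root on the server):
#
#   grep xfs /proc/filesystems      # does the running kernel know XFS?
#   mkdir -p /mnt/backup
#   mount /dev/sdb1 /mnt/backup     # /dev/sdb1 is an assumption
#   tar -cpf "/mnt/backup/backup$(date +%m%d%y).tar" /path/to/files
#   umount /mnt/backup
#
# Building a dated backupmmddyy.tar name, runnable anywhere:
name="backup$(date +%m%d%y).tar"
echo "$name"
```

If grep finds no xfs line, "modprobe xfs" will try to load the driver; if that fails too, the kernel was built without XFS support and reformatting the drive (or going over the network instead) is the fallback.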