I was disappointed to see (on the Page 1 chart) that the ST2000DL003 has only 64 bytes of RAM Cache. Seems a giant step backwards.

The SATA III interface brings with it more than just unusable speed; it is hooked into faster AHCI and 4K access modes. And that speed may well be usable by the 64 megabyte cache, if only briefly.

The WD Black sort of comes and goes in the charts. Was that the SATA II or the SATA III version? EDIT: OK, I see on page 6 it's a WD2001FASS, a SATA II model which seems to be giving way to the SATA III version but shares its 64MB cache, unlike the 32MB WDBAAZ0020HNC-NRSN. When did Western Digital decide that SATA II users need LESS cache?

Quick comment: on the first page you wrote: "The drive also has a new casing with a hollowed out portion around the motor supported with ribs [spiraling] out in a circle." The correct geometric term is "[radiating] out from the [hub]." A spiral is like the support struts on a Scythe Slip Stream fan -- they are curved.

Hmm, I am about to build a bigger NAS and I will be using my current WD EARS 2TB and a couple of other 2TB disks. I can't decide which ones, any ideas? WD have problems with the faulty 4K Advanced Format, load cycle count and time-limited error recovery. Seagate have had problems with CCTL settings that don't stick and, a couple of years ago, a huge batch of bricked drives. Samsung have had problems with bad firmwares, and their CCTL settings don't stick either. Hitachi's disks are loud as hell, and Hitachi has been bought by WD so they might go down the same path.

Morgue, you didn't provide very much information about your setup and what you want to do. You say that you have a single drive, so you must not be using RAID. If you are not familiar with RAID already, perhaps you shouldn't be trying to make your setup more complicated; often, people who think they need RAID would fare better without. You say you have a NAS, so I imagine your box runs Linux, BSD or something. These operating systems are not troubled so much by the drives' settings; it's mainly a problem for hardware RAID. I have not bought a WD drive for a long time, but I have had no issues with Samsungs, Seagates and Hitachis in software RAID so far (other than drive failure, of course). I didn't even look at their TLER/CCTL settings. I don't think I have used recent Hitachi models in RAID, however (I mainly use their laptop drives as system drives), and I have not used a 4K model in RAID yet. TLER/CCTL settings only affect drives when they encounter a problem. Keep in mind that drives which misbehave have a higher chance of failure than others; you may want to replace such drives anyway.

I basically want to know whether I can still use this HDD (only for storing data, i.e. no OS installed) on a Windows 7 system, even though I had previously used this HDD (again -only for storing data, i.e. no OS installed) on a Win XP system.

What you're probably worrying about is partition alignment in Windows XP with an Advanced Format drive. You'll need to follow the manufacturer's directions to make sure your drive is aligned properly under Windows XP.
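For what it's worth, "aligned" just means the partition's starting byte offset is a multiple of the drive's 4 KiB physical sector. XP's old default of starting partitions at LBA 63 fails that test, which is why these alignment utilities exist. A quick sketch of the arithmetic (the LBA values below are the classic XP and Vista/Win7 defaults, used here only as examples):

```python
# Check whether a partition's starting offset lands on a 4 KiB boundary.
LOGICAL_SECTOR = 512      # bytes per logical sector (what the OS addresses)
PHYSICAL_SECTOR = 4096    # bytes per physical sector on Advanced Format drives

def is_4k_aligned(start_lba):
    """True if a partition starting at this logical LBA sits on a 4 KiB boundary."""
    return (start_lba * LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0

print(is_4k_aligned(63))    # classic XP default: 63 * 512 = 32256 -> False (misaligned)
print(is_4k_aligned(2048))  # Vista/Win7 default: 2048 * 512 = 1 MiB -> True (aligned)
```

Vista/Win7 start partitions at a 1 MiB boundary, which is why formatting the drive on one of those machines first sidesteps the whole issue.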

You just have to worry about setting up the proper alignment. The best/easiest way to do it is if you have a Vista/Win7 machine, and you might be able to do it with some of the latest distros of Linux.

Just pop the drive in that machine, format it to NTFS, take the drive out and put it in the XP machine. Those OSes will properly format an Advanced Format drive. You are generally going to want to do this if it is going to be a system drive.

If it isn't a system drive for XP, a lot of manufacturers (okay, at least Samsung) have a formatting utility for their Advanced Format drives that you can use to format the drive under XP.

I just picked up a second Samsung F4EG 2TB drive, this one for my file server that is running XP. Just popped it in my Win7 box, formatted it, dropped it in the WinXP box, and it's running just dandy.

Of course, I also updated the firmware on the drive at the same time, which was a little more involved (thank you so much, Samsung, for not including a DOS boot environment with your firmware update, making us set it up on our own. *sigh*). The formatting and transplanting between machines maybe took me 6 minutes.

Thanks guys.

But the thing with this Seagate Green 2TB drive is that it does not need any software/formatting utility for its SmartAlign feature. This HDD basically does all the proper alignment for you.

I just want to use this HDD for storing data only (No Operating System installed) on a Win XP computer. Then physically transfer this HDD into a new Win7 computer in the future. I also do not partition my drives.

My question is whether all my data will remain intact and be visible on the new Win7 computer (without doing any additional partitioning or formatting)?

Partition alignment has nothing whatsoever to do with readability of the data. Whether it's correctly aligned or not, automatically or otherwise, you will not have any issues accessing data on the drive.

I am about to buy a 2TB drive after the third consecutive failure of WD's 2TB drives. The first WD20EARS, a three-platter drive, came up with bad sectors within a day; the second (also three-platter) lasted for 2-3 months and is now full of weak/bad sectors (I can't even load the OS with it attached); and the WD20EADS has 108 bad sectors. So much for Western Digital for me; in 8 years with Seagate drives this never happened, and I have 4 of them connected to my system now...

...the WD20EARS truly was a silent miracle, and I am looking to find the best option between Seagate and Samsung... I downloaded the MP3 files for these drives and, contrary to the review, I found the Samsung's growl much more disturbing and distinct than the Seagate's "tickier" noise, even if the latter does sound just a tad louder...

Do you have any input on this Seagate Green 2TB drive?

And a question about saving my data from the WD20EARS: since it has weak/bad sectors, is it preferable to run an HDD regeneration test on it before attempting to copy the files to a new HDD, and which tool would you choose for the copying under DOS?

Too bad about your nasty experiences with WD Greens.... but in reality you will find horror stories for every drive maker out there. Bad batches seem unavoidable... and user-specific conditions that accelerate HDD wear is also not unheard of.

Re - Seagate Green 2TB noise -- it's not bad. IMO, the differences among all the current 5400~5900 rpm drives are relatively small, and the differences can be made inaudible by using elastic suspension. If you are hard or grommet mounting, then the measured and reported acoustic differences in our reviews remain relevant & audible to the aurally sensitive, even if small.

I've had trouble with Seagate in the last few years. Samsung happens to be the brand I've had the least trouble with so far I think. I don't put great significance on this.

Pierre wrote:

Since it has weak/bad sectors, is it preferable to run an HDD regeneration test on it before attempting to copy the files to a new HDD, and which tool would you choose for the copying under DOS?

DOS? Use a live CD (it can be booted from a USB memory stick as well). And don't try to copy the files! There are some commercial utilities claiming to be able to do a better job, but I'd use ddrescue first.

Too bad about your nasty experiences with WD Greens.... but in reality you will find horror stories for every drive maker out there. Bad batches seem unavoidable... and user-specific conditions that accelerate HDD wear is also not unheard of.

My WD20EARS were produced in Thailand, the first I believe in August, the second on September 5; I don't know if any consistent failure rate for these dates has been recorded. Regarding the user-specific conditions, I don't see how they could be at fault... of course there are better PSUs than the Corsair TX750, but it's on the quality side for sure; the system is not very power-hungry and has been tested thoroughly to be ultra-stable, and a UPS is there to support just the tower and the LCD screen. Moreover, if there were system-specific causes for their failure, one could expect the other drives in the system to be affected as well... in short, there are no such conditions...

Of course I am aware that anyone can get unlucky...but after so many years with Seagate and minimal disturbances (hdd firmware upgrade), I'll take my chances with it instead...

MikeC wrote:

Re - Seagate Green 2TB noise -- it's not bad. IMO, the differences among all the current 5400~5900 rpm drives are relatively small, and the differences can be made inaudible by using elastic suspension. If you are hard or grommet mounting, then the measured and reported acoustic differences in our reviews remain relevant & audible to the aurally sensitive, even if small.

Unfortunately, I have not taken up elastic suspension yet (and I don't really favor it), so the differences are bound to remain evident... my comment above referred to the "quality" of the noise emitted by each one: although Seagate's may be higher in volume, I find it much less bothersome than the constant growling of the Samsung...

HFat wrote:

I've had trouble with Seagate in the last few years. Samsung happens to be the brand I've had the least trouble with so far I think. I don't put great significance on this.

I've never owned a Samsung, but this is not an obstacle in any sense....

HFat wrote:

Pierre wrote:

Since it has weak/bad sectors, is it preferable to run an HDD regeneration test on it before attempting to copy the files to a new HDD, and which tool would you choose for the copying under DOS?

DOS? Use a live CD (it can be booted from a USB memory stick as well). And don't try to copy the files! There are some commercial utilities claiming to be able to do a better job, but I'd use ddrescue first.

You mean a Linux live CD? ddrescue works to safely recover the files by some special access mode; does it run comparison tests between the files on the original drive and the ones restored? I had used Ontrack's EasyRecovery a long time ago on a problematic Maxtor drive (before Maxtor was acquired by Seagate); is that one of the kinds of programs you were referring to?

Would it be better if I ran an HDD regeneration on the damaged drive before attempting to recover the data?

Yeah. You don't need that if you're already running a free operating system on the box (from some other drive) of course.

Pierre wrote:

ddrescue works to safely recover the files by some special access mode, does it run comparison tests between the files on the original drive and the one restored?

Like I said, attempting to recover the files directly seems to be a bad idea if the drive is developing bad sectors. What ddrescue does is image the drive. It can be really slow if the drive has issues, because it does not bail out when it encounters a problem; it retries trouble spots and so on. As far as simple, free tools go, this is the one that will give you the best image, so far as I know. In some cases, you'll even get a perfect image.

You can then try whatever software you fancy to recover your files from the image. If you make a backup copy of the image first or if you keep the software from writing to it, you can mess things up and try again many times without doing any further damage to the failing drive.
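The imaging approach described above can be illustrated with a toy sketch: read the source in blocks, write whatever is readable into an image file, and record the spans that failed so a later pass knows where the holes are. This is only an illustration of the principle on ordinary files (the function name is made up for the example); for a real failing drive, use ddrescue itself, which also handles retries, direct device access and its mapfile properly.

```python
import os

BLOCK = 64 * 1024  # read in 64 KiB chunks; real tools start with coarse passes too

def image_device(src_path, img_path):
    """Copy src into an image file, zero-filling unreadable blocks and logging them."""
    bad_spans = []                      # list of (offset, length) spans that failed
    size = os.path.getsize(src_path)
    with open(src_path, "rb") as src, open(img_path, "wb") as img:
        offset = 0
        while offset < size:
            want = min(BLOCK, size - offset)
            try:
                src.seek(offset)
                data = src.read(want)
            except OSError:             # on a raw device, a bad sector raises an I/O error
                data = b"\x00" * want   # keep a zero-filled hole so offsets stay intact
                bad_spans.append((offset, want))
            img.write(data)
            offset += want
    return bad_spans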

Pierre wrote:

Would it be better if I ran an HDD regeneration on the damaged drive before attempting to recover the data?

I don't know exactly what you mean but it sounds like it could make things worse. Once you've got the best image you can get out of the drive, you don't need the drive anymore so you can try whatever fancy tricks you like on it. But only once you've got the image!

Would it be better if I ran an HDD regeneration on the damaged drive before attempting to recover the data?

I don't know exactly what you mean but it sounds like it could make things worse. Once you've got the best image you can get out of the drive, you don't need the drive anymore so you can try whatever fancy tricks you like on it. But only once you've got the image!

HDD regeneration is a term used for the process of "refreshing" the data on a hard disk, by which the data are read, written and re-read, and if any weak or bad sectors are found, the data are reallocated... Such is the process available in programs like HDD Sentinel and HDD Regenerator... this is what the latter's help file says on the issue:

Code:

Reads stored data from each blocks, writes back the contents and finally reads the information and compare with original contents. By the extensive test, an additional write cycle is used before writing back the contents to improve the efficiency of the error correction (drive regeneration).

The operation is usually safe for the stored information but data loss may occur if the system is not stable and/or upon power failure, overclocking, memory/power supply/cable problems and other factors.

The test can be used to refresh the data area of the storage device, without the need of complete erase but it is still recommended to backup important data before this test.

The test measures transfer time for all blocks to reveal which areas of the surface are slower. As the block is slower, the associated color is darker.

The reason why I am proposing this is to make sure the data is readable, and if not, to attempt to recover it and move the contents to another area of the drive which does not suffer from such issues, so that the data will be readable when I attempt to copy it after the regeneration pass... I must also point out I use a great little program called TeraCopy as the Windows default, which compares the hashes of the files copied to the destination against those of the source files to make sure you get an exact copy...
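For the curious, the read/write/verify cycle that the help file excerpt describes boils down to a loop like this (a simplified sketch operating on an ordinary file; a real regeneration tool works on raw sectors and relies on the drive's firmware to remap a weak sector when it is rewritten, which is precisely the kind of writing to a failing drive that makes this risky):

```python
def refresh_blocks(path, block=4096):
    """Read, rewrite and re-verify each block in place; report offsets that mismatch."""
    suspect = []
    with open(path, "r+b") as f:
        offset = 0
        while True:
            f.seek(offset)
            data = f.read(block)
            if not data:                 # end of file
                break
            f.seek(offset)
            f.write(data)                # writing back is what nudges a real drive's
            f.seek(offset)               # firmware into reallocating a weak sector
            if f.read(len(data)) != data:
                suspect.append(offset)   # block did not verify after the rewrite
            offset += len(data)
    return suspect
```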

HFat wrote:

Pierre wrote:

You mean a Linux live CD?

Yeah. You don't need that if you're already running a free operating system on the box (from some other drive) of course.

Pierre wrote:

ddrescue works to safely recover the files by some special access mode; does it run comparison tests between the files on the original drive and the ones restored?

Like I said, attempting to recover the files directly seems to be a bad idea if the drive is developing bad sectors. What ddrescue does is image the drive. It can be really slow if the drive has issues, because it does not bail out when it encounters a problem; it retries trouble spots and so on. As far as simple, free tools go, this is the one that will give you the best image, so far as I know. In some cases, you'll even get a perfect image.

You can then try whatever software you fancy to recover your files from the image. If you make a backup copy of the image first or if you keep the software from writing to it, you can mess things up and try again many times without doing any further damage to the failing drive.

I guess the program/command you are proposing does something similar, or comparable to an extent, to the regeneration process above, and it sounds quite effective and safe...

But I see an obstacle to this process: the drive I want to recover data from is almost completely full, like 1.5TB populated out of 1.89TB available. If I need to first make an entire image of the contents and then recover the files from it, I need two spare empty HDDs of the same capacity: the first to save the image of the contents and the second to recover the data from the image. This is really problematic because I don't have the money to make that happen... I was considering buying just one to recover the files from one of the drives, then getting an RMA for it and using the new drive to copy the contents of the second failed drive... then the second drive would be RMAed and I could sell the replacement...

HDD regeneration is a term used for the process of "refreshing" the data on a hard disk, by which the data are read, written and re-read, and if any weak or bad sectors are found, the data are reallocated...

Look, I'm no expert. But I don't like the idea of writing to a failing drive. Once you have a decent image, sure: why not try it? But it doesn't sound safe.

Pierre wrote:

I must also point out I use a great little program called TeraCopy as the Windows default, which compares the hashes of the files copied to the destination against those of the source files to make sure you get an exact copy...

Unfortunately it's not that simple. I've had bad hardware which would produce repeatable errors. In that case the hashes would check out and you'd get a bad copy without knowing any better. The proper way to do this is to write a hash (or similar) when you write the data, not when you read from it. This is what archivers do, what ZFS does and so on.
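The distinction here can be made concrete: record the hash at write time and check against it later. A read-then-copy tool can only confirm that the copy matches whatever it read, not what was originally written. A minimal sketch (the `.md5` sidecar convention is just an illustration, not any particular tool's format):

```python
import hashlib

def write_with_hash(path, data):
    """Write the data and record its MD5 in a sidecar file at write time."""
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".md5", "w") as f:
        f.write(hashlib.md5(data).hexdigest())

def verify(path):
    """Later: re-read the file and compare against the hash recorded when it was written."""
    with open(path, "rb") as f:
        actual = hashlib.md5(f.read()).hexdigest()
    with open(path + ".md5") as f:
        return actual == f.read().strip()
```

If the drive (or controller, or RAM) later corrupts the file, `verify` catches it, because the reference hash was taken from the bytes as they were intended, not as they were read back.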

Pierre wrote:

This is really problematic because I don't have the money to make that happen...

Then you've got to do what you've got to do. But I'd say you also don't have the money for commercial tools, which may or may not be snake oil. Having backups should be the first investment you make, and an extra drive you purchase today could be where you store your future backups. Sometimes drives fail without warning and all the data is lost with no recourse. If you really don't have the money for backups, then you don't have money for anything related to your data.

Do you know anyone who could lend you some spare capacity, like a "pirate" or someone who manages the computers of a school or a small business? I don't have such large drives for my personal use but on a temporary basis I could discreetly spare 1.5T of capacity for a comrade.

Look, I'm no expert. But I don't like the idea of writing to a failing drive. Once you have a decent image, sure: why not try it? But it doesn't sound safe.

Well, you are right in one respect, but it's not exactly like a failing, unstable CPU likely to produce errors in every operation... it may have weak and bad sectors, and of course there is the danger of those increasing, but the operation I am describing is about distinguishing weak sectors from proper ones... even if a sector is slow to read, the drive may still reallocate the data to protect it...

Quote:

Unfortunately it's not that simple. I've had bad hardware which would produce repeatable errors. In that case the hashes would check out and you'd get a bad copy without knowing any better. The proper way to do this is to write a hash (or similar) when you write the data, not when you read from it. This is what archivers do, what ZFS does and so on.

Since I do not use a ZFS file system, the solution you are proposing is not better in this respect than the one I was considering before... that is, the data could still have been corrupted when ddrescue makes an image of it... so ddrescue does not protect me against this, does it?

Quote:

But I'd say you also don't have the money for commercial tools which may or may not be snake oil. Having backups should be the first investment you make. And an extra drive you purchase today could be where you store your future backups. Sometimes drives fail without warning and all the data is lost with no recourse. If you really don't have the money for backups, then you don't have money for anything related to your data.

Do you know anyone who could lend you some spare capacity, like a "pirate" or someone who manages the computers of a school or a small business? I don't have such large drives for my personal use but on a temporary basis I could discreetly spare 1.5T of capacity for a comrade.

Good luck!

I have backups of the most crucial data that I cannot afford under any circumstances to lose, my essays and my material... that's the only middle way I can keep between not keeping any backup at all and not storing any data in the first place, for lack of money to duplicate every single drive that I keep... Your absolute reasoning would make sense if we were talking about a corporate environment: there, either you back up everything or you take a risk you may not be able to afford, so that the cost of the extra drives is less than the cost of losing the actual data... Beyond that corporate realm, that logic is either a luxury or practically false for being unattainable...

Concerning the commercial tools, I own a couple of them, not the most expensive though...

Your suggestions for the process of salvaging the data are valuable and sound... I'll see what I can do with them with the resources available... sadly there is no other data collector like me in my immediate circle (we are talking about three 2TB drives, two 1.5TB drives and one 1TB drive for data storage), or people with that kind of drive available or usable (i.e. either it is constantly in use or it does not hold enough free space for the process)... and as much as I value my data personally (I have a love for documentaries), I would not consider it a serious enough concern to start seeking service from outside the friendly circle; it's not that objectively important... If I couldn't even spare the one extra 2TB drive, which allows me a way out of the situation via the process I explained in my previous message, then I would consider it...

even if a sector is slow to read, the drive may still reallocate the data to protect it...

Yes, and not knowing what's wrong with the drive, the reallocation is precisely what I'd be concerned about.

Pierre wrote:

the data could still have been corrupted when ddrescue makes an image of it... so ddrescue does not protect me against this, does it?

Of course not. I was only saying that no software can protect you against such errors once the data has been written. Protection comes with planning. It doesn't have to be as complicated as ZFS: the lowly md5sum does a fine job on files which are modified infrequently.

Pierre wrote:

(I have a love for documentaries)

Oh, so it's not really your data. You only have one of the many copies out there. Then yeah, having your own backups might be unnecessary especially if you can use Linus Torvalds' approach to cloud backups.

Yes, and not knowing what's wrong with the drive, the reallocation is precisely what I'd be concerned about.

Reallocation can occur even without the use of a particular program which may initiate it in case of very slow reads, but I think leaving the data on weak/unstable sectors is more dangerous in the first place...

Quote:

Of course not. I was only saying that no software can protect you against such errors once the data has been written. Protection comes with planning. It doesn't have to be as complicated as ZFS: the lowly md5sum does a fine job on files which are modified infrequently.

So is there a program that can keep track of that? It wouldn't make sense to do it manually for each file.

Quote:

Oh, so it's not really your data. You only have one of the many copies out there. Then yeah, having your own backups might be unnecessary especially if you can use Linus Torvalds' approach to cloud backups.

Well, docs take up a substantial part of the HDD space; what sort of work could I be doing to need so many terabytes on my personal computer if it were all "my data"? But there is also a large amount of archives collected over the years... e.g. just for one particular set of events, the war in Libya over the last two months, I have collected about 15GB of selected videos (downloaded and captured/encoded from TV), images, articles, webpage screenshots, etc., and I have done this extensively over a number of subjects... But maybe this also does not fit the "my data" criteria...

So is there a program that can keep track of that? It wouldn't make sense to do it manually for each file.

There probably are several. I wouldn't bother, because md5sum makes it easy to generate hashes for a bunch of files and to compare the hashes yourself with standard tools like find. Keep it simple! This way you can verify the hashes from any OS (except Windows installs which have not yet been upgraded with Cygwin/MSYS/whatever).

If you really need such a program and can't find a good one, I've half-jokingly recommended using a bittorrent app. They allow you to make a single hash file for a whole directory, tell you exactly where in your big files you've got an error... and double nicely as a network/cloud backup application.

Alternatively you could do your backups with a program that automatically generates hashes such as duplicity. If you're not doing backups, you have bigger worries than hash management!
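As a sketch of the md5sum-style approach above, here is a manifest generator and checker using the `md5sum -c` line format and only the standard library (the function names are made up for the example; on Linux the real md5sum and find do the same job):

```python
import hashlib
import os

def make_manifest(root, manifest):
    """Walk root and write 'hash  relative/path' lines, md5sum-compatible."""
    with open(manifest, "w") as out:
        for dirpath, _dirs, files in sorted(os.walk(root)):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                with open(full, "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                rel = os.path.relpath(full, root)
                out.write(f"{digest}  {rel}\n")

def check_manifest(root, manifest):
    """Return the list of files whose current hash no longer matches the manifest."""
    bad = []
    with open(manifest) as f:
        for line in f:
            digest, rel = line.rstrip("\n").split("  ", 1)
            with open(os.path.join(root, rel), "rb") as g:
                if hashlib.md5(g.read()).hexdigest() != digest:
                    bad.append(rel)
    return bad
```

Keep the manifest file outside the tree it describes (or it will end up hashing itself on the next run), and store a copy of it with the backup.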

Pierre wrote:

what sort of work could I be doing to need so many TerraBytes for my personal computer if it all was "my data"?

There are so many reasons to have loads of data... The most obvious would be if you recorded video: even if you released most of your work to acquaintances or even to the public, you'd probably have lots of stuff that didn't make the final cut which you'd want to keep on hand.

Pierre wrote:

But there is also a large amount of archives collected over the years...e.g. just for the case of a particular set of events, the war in Libya the last two months, I have collected about 15GBs of selected videos (downloaded and captured/encoded from TV), images, articles, webpage screenshots etc...and this done extensively over a number of subjects...But maybe this also does not fit the "my data criteria"...

I don't want to get into semantics and intellectual property issues. What matters is: can you easily download or rip the stuff again? The 21st-century version of press clipping you've done definitely ought to be backed up, regardless of who legally owns the stuff. For documentaries you have on optical discs, or which are easy to download, a backup is not absolutely necessary (I'd still do it, but you might call that a luxury).

Assuming your work has some value, if you somehow really lack the resources to back up your clippings properly (that means having more than two copies, no matter how impractical you think that is!), you should get help. You could edit/index/package your stuff in a way that makes it attractive and usable by other people, and share it; this assumes you've got acquaintances who could help get the word out and the distribution going (otherwise it's still a good long-term goal, but you'd need another approach in the meantime). You could get in touch with an organization which collects this kind of stuff and which can afford a few drives; from their point of view, you wouldn't be someone requesting a service but a potential volunteer contributor to their archives. You could simply request donations of old hardware from acquaintances or even the public: http://www.freecycle.org/group/GR/Greece If you can spare some capacity but not dedicated backup drives, you could use a P2P cloud backup scheme such as Wuala (which I haven't personally tested). And finally, maybe you should try to meet fellow data collectors in your area.
