just brew it! wrote:It is "best" in the sense that it reduces fragmentation to zero in a single pass, and works on any filesystem (even ones that don't support defraggers like ext4).

Eh, Raxco Perfectdisk will bring it to zero in a single pass too, consolidation and all (except for locked system files but that's a given). Faster too since it's unlikely to have to move every single file.

Depending on how full the disk is, it may need to move *some* files multiple times though.

ChronoReverse wrote:Still, unless you're on an XP machine, you don't really have to worry about this nowadays.

Agreed.

The years just pass like trains. I wave, but they don't slow down.-- Steven Wilson

EsotericLord wrote:While this is true, Win 7 will also automatically defrag when the computer is idle should it miss that 1am Wednesday schedule.

How idle is "idle"? I have an HDD-based laptop that is never up Wednesday mornings, and in fact only runs during the weekends typically. Although it is sometimes idle for an hour or two, the level of disk fragmentation seems to be creeping upward (only checked it a couple times out of curiosity but did see 12% recently).

Trigger:

Weekly at 1am every Wednesday of every week, start 1/1/2005

Conditions:

Idle:
- Start the task only if the computer is idle for: 3 minutes
- Wait for idle for: 7 days
- Stop if the computer ceases to be idle
- Restart if the idle state resumes

Power:
- Start the task only if the computer is on AC power
- Stop if the computer switches to battery power

Settings:

Run task as soon as possible after a schedule start is missed

"Welcome back my friends to the show that never ends. We're so glad you could attend. Come inside! Come inside!"

Guess that would explain it. Although the laptop does run on AC quite a bit, I tend to hibernate it when not in use. So autodefrag probably only kicks in once a month or so, and may not fully complete when it does.

I do still partition my drives, for the same reasons that the OP raised:

1. Flexibility to avoid backing up low-change data more often than necessary.
2. Flexibility to reduce defrag frequency for some partitions because defrags cause incremental and differential backups to grow in size.
3. Shorten the window for backups and defrags.
4. Allow for differing numbers of incremental backups based on partition type.
5. Diskeeper "IFAAST" places files on the part of a disk partition that allows best performance, based on usage statistics taken from the file system over time. For example, my VST virtual instrument libraries often have large files, up to 2 GB in size each. These files serve as containers for the compressed WAV or other sound samples within. The container files are only updated when I apply an update to a VST instrument, and not in everyday use. Diskeeper IFAAST would typically place these big files at the END of my partition, and allow more frequently updated files to be placed FIRST on the partition, with the freespace in the MIDDLE. This is a great strategy for some volumes with this type of large, never-updated data.

With Windows 8, I've transitioned to a somewhat simpler partitioning strategy, but I still have partitions, for the reasons noted above. The system partition will get backed up every 4 to 8 hours, the music recording partition once every 12 hours, the application partition once per day, the MP3 and podcast partition once per week, and the VST instrument partitions (I have more than one, some reside on SSDs) once per month. This works well for me, and it keeps the daily backups from requiring 4 TB drives.

I use Diskeeper, and there's a new version out now that adds a feature called "instant defrag", which supposedly defrags a volume as soon as newly written data ends up fragmented. I'm doing a trial of it right now.

The new Diskeeper also recognizes SSDs and only sends TRIM to them; doesn't perform a traditional defrag on SSDs.

Full Disclosure: I am not connected to Diskeeper or its company Condusiv in any way except that I've been a satisfied long-time customer.

BIF wrote:I do still partition my drives, for the same reasons that the OP raised:

1. Flexibility to avoid backing up low-change data more often than necessary.
2. Flexibility to reduce defrag frequency for some partitions because defrags cause incremental and differential backups to grow in size. <snip>

By definition, incremental/differential backups only back up data that has changed, so #1 and #2 are contradictory. Furthermore, most incremental/differential backups operate on files, not on raw disk blocks; what matters is whether the contents of a file have changed, not where the file's blocks physically reside on disk. So defrags should have no effect on what an incremental backup tool considers to have "changed".
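That point can be sketched in a few lines of Python: a file-based incremental tool compares per-file metadata (or content) between runs, and a defrag touches neither. This is an illustrative toy, not any real backup product's logic; the `snapshot`/`changed_files` helpers and the `demo.bin` file are invented for the demo.

```python
import os

def snapshot(paths):
    """Record (mtime, size) per file -- the metadata a typical
    file-based incremental backup tool compares between runs."""
    return {p: (os.path.getmtime(p), os.path.getsize(p)) for p in paths}

def changed_files(old, new):
    """Files whose metadata differs from the previous snapshot."""
    return [p for p, meta in new.items() if old.get(p) != meta]

# Create a file and snapshot it.
with open("demo.bin", "wb") as f:
    f.write(b"x" * 4096)
before = snapshot(["demo.bin"])

# A defrag moves the file's blocks on disk but leaves the contents
# and timestamps alone, so the snapshot is identical afterwards.
after_defrag = snapshot(["demo.bin"])
print(changed_files(before, after_defrag))  # []

# An actual content change updates mtime/size and is picked up.
with open("demo.bin", "ab") as f:
    f.write(b"y")
after_edit = snapshot(["demo.bin"])
print(changed_files(before, after_edit))  # ['demo.bin']
```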


One partition per physical device seems simplest. All of that data sorting that you're talking about doing with different partitions could just as well be done with different devices (particularly since some of them are well-suited to SSDs and others are well-suited to slower hard-drives). If not, use the file system. That's what it's for.

And my HD in here for backups is a Storage Space so it keeps itself defragged.

I find the theory that you don't need to defrag manually ludicrous, as I have seen files get into the thousands of fragments, which can cause havoc with the app using them. So I too recommend PerfectDisk for HDs. I do recommend turning off its SSD crap, since it makes an open area by fragmenting everything else, which even on an SSD is ridiculous (there's still a very large difference between random and sequential access on SSDs). MyDefrag also does VERY well on HDs.

(Possible Raxco has changed that since I told them why it's stupid, but I don't know.)

When it comes to SSDs, what I've read elsewhere suggests that defragging one could actually theoretically degrade its performance. SSD performance is largely due to their ability to access data simultaneously from all the NAND chips in the drive - hence why lower-capacity drives in the same product range have lower rated performance. If you defrag in the traditional sense, you'd be organizing your files sequentially so that an entire file was on a single chip, and you'd actually cripple performance as a result. I'm not certain on the technicalities, this is only the gist I've picked up from reading various sources, but it seems that SSDs effectively rely on what would normally be considered "fragmentation".

There often seems to be a lingering sense of annoyance for some people over the idea that they should get out of the habit of defragging when they move to an SSD. I'm not really sure why, but I've never once seen an authoritative source suggest there's any benefit to it, and I've certainly seen more than a couple suggest that it's a very bad idea, and should be avoided entirely.

GrimDanfango wrote:When it comes to SSDs, what I've read elsewhere suggests that defragging one could actually theoretically degrade its performance. SSD performance is largely due to their ability to access data simultaneously from all the NAND chips in the drive - hence why lower-capacity drives in the same product range have lower rated performance. If you defrag in the traditional sense, you'd be organizing your files sequentially so that an entire file was on a single chip, and you'd actually cripple performance as a result. I'm not certain on the technicalities, this is only the gist I've picked up from reading various sources, but it seems that SSDs effectively rely on what would normally be considered "fragmentation".

There often seems to be a lingering sense of annoyance for some people over the idea that they should get out of the habit of defragging when they move to an SSD. I'm not really sure why, but I've never once seen an authoritative source suggest there's any benefit to it, and I've certainly seen more than a couple suggest that it's a very bad idea, and should be avoided entirely.

It is bad for the SSD to write too much, but defragging would only reduce performance notably on one that doesn't do TRIM.

The controller takes care of data placement, and one would assume most algorithms would only tell the OS it's sequential if the files are exactly where they're supposed to be for peak performance. I highly doubt anyone would make an SSD controller that completely ignored common sense with regards to filesystems.

GrimDanfango wrote:When it comes to SSDs, what I've read elsewhere suggests that defragging one could actually theoretically degrade its performance. SSD performance is largely due to their ability to access data simultaneously from all the NAND chips in the drive - hence why lower-capacity drives in the same product range have lower rated performance.

This much is true.

GrimDanfango wrote:If you defrag in the traditional sense, you'd be organizing your files sequentially so that an entire file was on a single chip, and you'd actually cripple performance as a result.

This part is incorrect. In order to get the performance boost from multiple chips, the chips are wired up such that sequential reads result in data being fetched from all chips in parallel (very similar to how RAID-0 works). So defragging won't kill performance for the reason you indicate; however, it also isn't likely to help either, and may be detrimental for the other reasons already stated.
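The RAID-0 analogy can be shown with a toy model: consecutive logical pages map round-robin onto channels, so even a perfectly contiguous file already spans every channel. The channel count and stripe size below are invented illustration values, not any actual controller's geometry.

```python
# Toy model of NAND-channel striping: the controller assigns
# consecutive logical pages to channels round-robin, RAID-0 style.
NUM_CHANNELS = 8      # made-up figure for illustration
PAGES_PER_STRIPE = 1  # one logical page per channel before wrapping

def channel_for_lba(lba):
    return (lba // PAGES_PER_STRIPE) % NUM_CHANNELS

# A perfectly defragmented (contiguous) 32-page file:
contiguous_file = list(range(100, 132))
channels_hit = {channel_for_lba(lba) for lba in contiguous_file}
print(len(channels_hit))  # 8 -- every channel participates

# So a sequential read of a contiguous file already fans out across
# all channels; a defrag can't "concentrate" a file onto one chip,
# and fragmentation isn't what provides the parallelism.
```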


Savyg wrote:I highly doubt anyone would make an SSD controller that completely ignored common sense with regards to filesystems.

Modern file systems do not write data in a contiguous fashion, and haven't for quite some time. This was a prominent reason why, in the aftermath of the space shuttle Columbia's destruction, we were able to recover its data: the systems aboard the shuttle were so old that they still wrote data in a contiguous fashion.

Ontrack Data Recovery & NASA wrote:We use our extensive knowledge of Operating Systems to target just the areas where data resides, which allows us to avoid damaged areas unless absolutely necessary. Modern OS's tend to scatter the data, but this drive used DOS FAT16, which kept the data contiguous.


While modern journaling file systems are certainly more complex in how they lay data out on the disk (and therefore more difficult when it comes to forensic recovery), individual files still need to be reasonably contiguous. Otherwise you'd get horrible performance on mechanical drives since everything would look like random access even when streaming a file sequentially.
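The cost of losing that contiguity on a mechanical drive can be estimated with a back-of-envelope model: each fragment adds roughly one seek plus rotational latency, while the transfer time is the same either way. The figures below are ballpark numbers for a consumer 7200 RPM drive, not measurements of any particular model.

```python
# Rough model of why fragmentation hurts spinning disks.
SEEK_MS = 10.0        # assumed avg seek + rotational latency per fragment
SEQ_MB_PER_S = 150.0  # assumed sustained sequential transfer rate

def read_time_ms(file_mb, fragments):
    """Estimated time to stream a file split into N fragments."""
    transfer_ms = file_mb / SEQ_MB_PER_S * 1000.0
    return fragments * SEEK_MS + transfer_ms

# Streaming a 100 MB file:
print(read_time_ms(100, 1))     # ~677 ms when contiguous
print(read_time_ms(100, 1000))  # ~10.7 s in 1000 fragments
```

With a thousand fragments the drive spends almost all its time seeking, which is exactly the "random access while streaming" effect described above.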


My point is that the subject is complex, and that things don't work the way you might expect them to, or even the way they used to more than a decade ago. How many people know that file systems have changed their originally sequential nature? It sure was news to me when I learned of it years ago. I don't even see the subject actively discussed.

Assumptions about "common sense" may not lead you where you should go. Yes, an HDD needs some level of contiguous file allocation, but how much does it really need? How perfectly defragmented does the disk need to be? How much more does data locality matter versus contiguous data? If all the data is on the same track, does it really matter that it is fragmented? The first two are questions without easy answers (though I'd argue not as much as people believe); the latter two are rhetorical. Locality does matter more, and data on the same track, which doesn't require the head to move, doesn't need to be contiguous. Modern data density has created a lot of locality.

I'd also note that many other major enterprise file systems, like ZFS, lack a defrag tool. For quite some time even ext3 and ext4 didn't have a defrag option either (there is now e4defrag).

In my opinion of course, far too much effort is expended in defragging, for something that today has a negligible impact. Talking of applying such traits to an SSD feels akin to discussing using thermonuclear weapons to fell a forest.

It certainly will get the job done in an expedient manner, but you've stirred up all sorts of discussions of acceptable returns and the secondary consequences of your actions.

As you've so wisely noted before on this forum, we haven't discussed the sort of collateral damage that can result from defrag + bad RAM.


While I don't agree with your sentiments, this is the part I take issue with.

Ryu Connor wrote:In my opinion of course, far too much effort is expended in defragging, for something that today has a negligible impact. Talking of applying such traits to an SSD feels akin to discussing using thermonuclear weapons to fell a forest.

Ryu Connor wrote:I'd also note that many other major enterprise file systems, like ZFS, lack a defrag tool. For quite some time even ext3 and ext4 didn't have a defrag option either (there is now e4defrag).

Yup. I wasn't even aware that e4defrag was now considered stable enough for production use; I just checked my Ubuntu 12.04 system and it's there so I guess it's finally "ready for prime time". TBH I never really missed the defrag tools on ext3/4 -- these file systems seemed to be good enough at minimizing fragmentation that lack of a defragger wasn't a serious hardship.

Ryu Connor wrote:As you've so wisely noted before on this forum, we haven't discussed the sort of collateral damage that can result from defrag + bad RAM.

Agreed. In fact, this may be the biggest single argument against doing unnecessary defrags, especially on systems without ECC RAM (the vast majority of consumer PCs). Even "good" RAM can suffer from the occasional flipped bit if you don't have ECC... and every time you move the data around, you expose yourself to this risk since it gets moved through RAM.


I already mentioned that. The controller isn't going to write everything as the OS specifies; it's going to write it for its own benefit. It doesn't matter if it's sequential on the hardware... if the placement is optimal, it's going to report no fragments.

For Win 7 and 8 it is best to just let the system handle it, and remember to leave the puter on from time to time to let it do its stuff.

For SSDs, never ever EVER defrag, as it's not a bloody disc so it doesn't need it EVER. You only needed to defrag discs because of where the data could wind up on the dang disc and how that affected access times. You don't give a flyin' fig where the data is on an SSD, as every place is the same access-speed-wise.

wintermane666 wrote:For SSDs, never ever EVER defrag, as it's not a bloody disc so it doesn't need it EVER. You only needed to defrag discs because of where the data could wind up on the dang disc and how that affected access times. You don't give a flyin' fig where the data is on an SSD, as every place is the same access-speed-wise.

It depends on what you mean by "defragmenting" an SSD. Some modern products, such as Raxco PerfectDisk, actually detect that you're working with an SSD and behave differently, implementing space consolidation algorithms without moving file chunks around:

Raxco PerfectDisk Pro blurb wrote:SSD Optimize is an optimization method for SSDs that focuses on free space consolidation without defragmentation of files. Solid State Drives are not affected by file fragmentation like traditional electromechanical disk drives. As such, it will leave files in a fragmented state while consolidating free space into large pieces.

There does seem to be a mystique about defragmentation amongst some computer users. My brother-in-law occasionally runs into trouble with his computer and phones me for assistance, and one of the first phrases I will hear him utter is "I've defragmented the hard drive but that hasn't helped"... I have explained to him several times what defragmentation is, how Windows 7 does it automatically, and that nothing about any of his problems has ever pointed to file fragmentation being the issue, but alas - he is a true believer.

The few times I have manually defragmented a drive on recent occasions, it has been in preparation for making a backup clone of a system partition (using CloneZilla); however, given earlier comments in this thread, I understand even that was an unnecessary waste of time! I can understand people still on mechanical system drives defragmenting their hard drives to aid a faster boot time, but in my experience it just seems to unnecessarily consume time and doesn't have any greater efficacy than the use of software such as Startup Delayer - http://www.r2.com.au/page/products/show/startdelay - which I've added to quite a few machines over the years.

puppetworx wrote:There does seem to be a mystique about defragmentation amongst some computer users. My brother-in-law occasionally runs into trouble with his computer and phones me for assistance, and one of the first phrases I will hear him utter is "I've defragmented the hard drive but that hasn't helped"... I have explained to him several times what defragmentation is, how Windows 7 does it automatically, and that nothing about any of his problems has ever pointed to file fragmentation being the issue, but alas - he is a true believer.

This mindset is actually encouraged by the scripts used by front line tech support people. Several years ago I was trying to fix an HP laptop that wouldn't POST. Called their support number; BIG mistake. First thing they wanted me to do was defrag the hard drive. I had to explain (repeatedly!) that it was impossible to do so because I couldn't even get into the OS, and it wouldn't have done any good even if I could.

I have to wonder if "defrag the hard drive" was just a euphemism for "we're too busy right now, please go away for a couple of hours and call back again after the defrag doesn't fix your problem".


wintermane666 wrote:For Win 7 and 8 it is best to just let the system handle it, and remember to leave the puter on from time to time to let it do its stuff.

For SSDs, never ever EVER defrag, as it's not a bloody disc so it doesn't need it EVER. You only needed to defrag discs because of where the data could wind up on the dang disc and how that affected access times. You don't give a flyin' fig where the data is on an SSD, as every place is the same access-speed-wise.

Access times aren't the problem on SSDs. The difference between random and sequential access is still significant. Might not affect most people, but definitely affects games.

Buub wrote:It depends on what you mean by "defragmenting" an SSD. Some modern products, such as Raxco PerfectDisk, actually detect that you're working with an SSD and behave differently, implementing space consolidation algorithms without moving file chunks around.

Last I tried that, it made things considerably more fragmented and noticeably hurt performance, all so that new files would go in unfragmented. It was a bad idea, and I don't know why they thought otherwise.

wintermane666 wrote:For SSDs, never ever EVER defrag, as it's not a bloody disc so it doesn't need it EVER. You only needed to defrag discs because of where the data could wind up on the dang disc and how that affected access times. You don't give a flyin' fig where the data is on an SSD, as every place is the same access-speed-wise.

Access times aren't the problem on SSDs. The difference between random and sequential access is still significant. Might not affect most people, but definitely affects games.

I'm not even sure what you're trying to say here. I find it difficult to believe that random vs. sequential makes a perceptible difference on an SSD unless you are reading or writing lots of data in *very* small chunks.


just brew it! wrote:I'm not even sure what you're trying to say here. I find it difficult to believe that random vs. sequential makes a perceptible difference on an SSD unless you are reading or writing lots of data in *very* small chunks.

Oh sheesh, I'm sorry for the long post. But I wanted to share my experience and thinking on this, for whatever it's worth.

just brew it! wrote:

BIF wrote:I do still partition my drives, for the same reasons that the OP raised:

1. Flexibility to avoid backing up low-change data more often than necessary.
2. Flexibility to reduce defrag frequency for some partitions because defrags cause incremental and differential backups to grow in size. <snip>

By definition, incremental/differential backups only back up data that has changed, so #1 and #2 are contradictory.

Not contradictory, because the backup methodology involves more than just hardware and software: Don't forget the wetware.

Furthermore, most incremental/differential backups operate on files, not on raw disk blocks; what matters is whether the contents of a file have changed, not where the file's blocks physically reside on disk.

This may be so in theory, but it has not been my observation, at least not with respect to Acronis True Image (my old backup software) and Diskeeper. If even a single 4K block is moved, the WHOLE FILE gets marked as "changed", and therefore gets picked up in the next incremental backup run. Some VST container files are really quite huge and being already compressed by the software maker, they are not very compressible by a backup program.

It's possible that Macrium only backs up blocks/sectors and that this won't be an issue.

So defrags should have no effect on what an incremental backup tool considers to have "changed".

Again, this is not in agreement with my observations, which typically went something like this before I segregated my data (this was with Acronis):

1. Schedule runs a full backup. Let's say it's a drive that contains a lot of data and the full backup is 500 GB, backed up to a 1.5 TB backup drive.
2. I use the system for a week or so. During that time, I install and update no software; but I use Office and a web browser.
3. Schedule runs the first incremental backup.
4. I observe that the size of the incremental backup image is 300-350 GB.
5. Ask myself the question: WTF?
6. Another week or three pass. I now have several incremental images that approach anywhere from 25% to 75% of the size of the original full backup image. Now my 1.5 TB backup drive is full.

Research reveals that many of my *.WAV, *.NKI, and *.NKS files have been backed up by one or more incremental runs. Some of these are downright HUGE, so now I have many versions of them within the incremental backups.

But why? As noted in 2 above, I have not installed or updated software. For background, VST instruments don't receive a lot of updates and I certainly don't update them on a day-to-day or even month-to-month basis.

Answer: It's DEFRAG. Either Windows defrag, Diskeeper, or "other"; whichever one you have running on its schedule. It's moving stuff around and flipping on the change bits, making incremental backups big.

Diskeeper uses IFAAST, which will move pieces/parts around as the file system statistics indicate. It's not uncommon for my VSTi partitions to have all the big container files at the END of the partition with the freespace in the middle and the frequently-updated files (what few there may be) placed at the beginning of the partition. This happens over many days and weeks, and not all at once; which explains why I end up with several iterations of big incremental backups.

Eventually things quiet down and the partition is in a plateau state with not much being defragged. But once you resize the partition or update the software on it, then it starts all over again.

The initial decision to partition my drives was to give me a little bit more granularity with regards to backups and defrags, and it has been a good decision for me.

I am thinking ahead, however, and I know that in the coming years, moving more and more of my data to SSDs will reduce my need for a defragger. Even now (because half of my VST instrument samples are already on SSDs), I'm not convinced that Diskeeper is worth the money. I am trialing the latest version, and I may or may not make the upgrade purchase. If I do, it may well be my last one ever, because I expect that in 5-7 years' time, I'll only be using HDD media for backup storage and NAS devices; nothing that will need the performance improvement offered by a defragger.

Today's SSDs are not huge, so I will STILL end up with several partitions for the foreseeable future; just not for the original reasons.

Years ago, I wanted my backup jobs to run quickly, to avoid long windows of sluggish response produced by intense disk I/O and the CPU being busy compressing/encrypting my backup data, especially while using the DAW software, which could be prone to audible clicks, pops, dropouts, and even crashes. Therefore, I wanted backups, defrags, and antivirus scans to only run when I was sleeping or at my day job.

I will admit that new hardware today can take a lot of punishment and backups/defrags seem to not impact my music stuff like it did in the past. Hence, some of the original reasons for segregation are not as strong as they used to be. I have simplified my partitioning strategy accordingly, and I've also not established a strict schedule for maintenance like I did in the past. I will continue to re-examine my strategies with each new system I build; though the next one may not happen until 2018-2020. If we still have PCs, that is.

Acronis isn't a traditional backup tool. It has nothing to do with what JBI is talking about.

Defrag does not change the archive attribute, which is what a traditional backup uses to identify new or modified files. In fact, a restore from a traditional backup would result in a disk that is contiguous. Not so with Acronis, because it does not back up files.

It didn't hit me until Ryu's post that you were attempting to do incremental backups at the block level.

Your issue is that you are attempting to use a disk imaging product to do periodic incremental backups. TBH this is sort of like using a screwdriver to hammer in a nail. Imaging products work at the block level, not the file level. Doing incremental block-level backups is fundamentally incompatible with defragging (and quite frankly, may not be a particularly good fit for journaling file systems either, depending on how the file system is being used).

Defragging doesn't change the archive bit or the modification timestamps of files. These are the mechanisms that normal (file-based) incremental backup tools use to detect whether a file has changed. But a block-level incremental tool just looks at whether the contents of each physical disk block have changed; it isn't going to search the entire disk to figure out whether the data for a block has simply been moved elsewhere and record that fact; it is just going to back the entire block up again.
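That distinction can be sketched with a toy model: a block-level tool diffs physical positions, so a defrag that merely moves data shows up as "changed" blocks, while a file-level comparison of the same file sees nothing to back up. This is an illustration of the general idea, not how Acronis or any specific product actually works.

```python
import hashlib

# Toy disk: a list of 4 KB "blocks". A defrag moves a file's
# blocks to new positions without changing their contents.
BLOCK = b"A" * 4096
empty = b"\x00" * 4096

def block_changes(old_disk, new_disk):
    """A block-level tool re-backs-up every position whose
    contents differ, regardless of why they differ."""
    return [i for i, (a, b) in enumerate(zip(old_disk, new_disk)) if a != b]

# Before defrag: the file occupies blocks 0 and 5.
before = [BLOCK, empty, empty, empty, empty, BLOCK]
# After defrag: same file contents, now contiguous at blocks 0-1.
after = [BLOCK, BLOCK, empty, empty, empty, empty]

print(block_changes(before, after))  # [1, 5] -> two blocks re-backed-up

# A file-level tool compares the file itself, which is unchanged:
file_before = BLOCK + BLOCK
file_after = BLOCK + BLOCK
print(hashlib.sha256(file_before).digest() ==
      hashlib.sha256(file_after).digest())  # True -> nothing to back up
```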

If you're going to use incremental block-level backups on a file system that gets defragged, the only sensible way to do it is run a full backup after each defrag, and run incrementals only between defrags. A file-based incremental backup tool would be a much better solution.
