Met up with my local Veeam SE Joe today, and he gave me a good tip. With Veeam, if you turn compression off and set dedupe in Veeam to minimal, you can configure Server 2012 to dedupe the incremental files and leave the current full alone (when you're doing a reverse incremental job). He said he saw around 65% dedupe on the reverse incrementals, and this lets you keep a mostly hydrated full backup ready to restore super quick, as well as improve backup performance times. This sounded like a pretty good alternative to dumping to a VTL (Data Domain, StoreOnce), where your recovery speed can be pretty limited by the rehydration speed of your appliance.
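To put rough numbers on the tip, here's a back-of-the-envelope sketch. Only the ~65% savings figure comes from the post; the repository size, rollback size, and retention below are invented for illustration.

```python
# Footprint sketch for "hydrated full + 2012-deduped rollbacks".
# Only the 65% savings figure comes from the post above; every other
# number is an invented example.

full_vbk_gb = 2000       # current full (.vbk), left hydrated for fast restores
daily_vrb_gb = 100       # each reverse-incremental rollback file (.vrb)
retention_days = 30      # rollbacks kept on disk
dedupe_savings = 0.65    # ~65% savings on the rollbacks, per the SE

raw_gb = full_vbk_gb + daily_vrb_gb * retention_days
deduped_gb = full_vbk_gb + daily_vrb_gb * retention_days * (1 - dedupe_savings)

print(f"raw footprint:    {raw_gb:,} GB")        # 5,000 GB
print(f"with 2012 dedupe: {deduped_gb:,.0f} GB") # 3,050 GB, full still hydrated
```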

This also gives Veeam global dedupe (so you don't have to try to stuff everything into the same job) and allows for far better CPU/memory performance, since dedupe runs as a post-process using the server's rolling-hash calculation rather than inline.
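For anyone wondering what a post-process, rolling-hash dedupe pass actually does, here is a toy content-defined-chunking sketch in Python. The hash, window, and boundary mask are invented for illustration; they are not Microsoft's actual parameters (Server 2012 targets much larger, roughly 32-128 KB chunks).

```python
import hashlib

WINDOW = 48           # bytes that effectively influence the rolling hash
MASK = (1 << 13) - 1  # declare a boundary when the low 13 bits are zero

def chunks(data: bytes):
    """Yield content-defined chunks: a boundary lands wherever the hash
    of the trailing bytes matches the mask, so chunk edges survive inserts."""
    h, start = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF      # toy rolling-ish hash
        if i - start >= WINDOW and (h & MASK) == 0:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def post_process_dedupe(files):
    """Offline pass over finished files: store each unique chunk once."""
    store = {}  # sha256 -> chunk bytes
    for data in files:
        for c in chunks(data):
            store.setdefault(hashlib.sha256(c).hexdigest(), c)
    return store

files = [b"abc" * 10000, b"abc" * 9000 + b"xyz" * 1000]
store = post_process_dedupe(files)
print(len(store), "unique chunks for", sum(map(len, files)), "bytes in")
```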

You could have done this with anybody doing proper dedupe for years. Check the offerings here:

It's also worth mentioning that this is TARGET dedupe, not GLOBAL dedupe, as no referenced solution except the unreferenced StoreOnce from HP shares a deduplication base between targets (MS can't do it even for LUNs on the same server).
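The target-vs-global distinction in a tiny sketch (pure illustration, not any vendor's implementation): with per-target dedupe, identical data landing on two volumes is stored once per volume; a global store would keep it once, period.

```python
from hashlib import sha256

def ingest(chunk_store, chunk):
    """Store a chunk once per store, keyed by its content hash."""
    chunk_store.setdefault(sha256(chunk).hexdigest(), chunk)

chunk = b"the same 64 KB of backup data" * 1000

# TARGET dedupe: one chunk store per volume (the Server 2012 model)
vol_e, vol_f = {}, {}
ingest(vol_e, chunk)
ingest(vol_f, chunk)
print("target dedupe, copies on disk:", len(vol_e) + len(vol_f))  # 2

# GLOBAL dedupe: one chunk store shared by all targets
shared = {}
ingest(shared, chunk)
ingest(shared, chunk)
print("global dedupe, copies on disk:", len(shared))              # 1
```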

True, kooler, but being able to stack small jobs with different schedule needs is nice. Considering the cost of a modest 6 TB VTL is still crazy, this is an interesting method. What's nice is this lets you have it both ways: a fully hydrated last-night full, and dedupe on the old stuff. If I'm using a DD or something, my full on that target will have to rehydrate (everyone always brags about ingest speed, never recovery speed...).

It's a great tip, and when you stop and think about it, it's really a no-brainer. I'm going to have to look over my Unitrends setup this weekend and see if I can manage something similar.

Unitrends doesn't perform any dedupe functions until the next backup runs, since the majority of restores, other than full DR restores, are done from the most recent copy of your backup. This eliminates extra time spent on hydration, etc.

Hi All,

This is my first post, so I don't want to spam any links or come across as spamming links. I host a small web show at Veeam, and we actually did a full episode with a few colleagues on this exact topic that Joe mentioned to John. I can PM the links or post a link to the episode here if that's okay.

The results are indeed great and can complement what we already do at the dedupe/compression level. There are some considerations to look at, like Instant VM Recovery or SureBackup jobs.

kooler, how's the dedupe performance on your arrays?

Sure. "It's always good to have an option" (c) ... Ideal case - having the same code running as in-line (primary data storage) and off-line (backup).

For now, with generation 2, it's ~40% of raw array performance (so dedupe adds overhead and chews memory). With the generation switch we expect in December-January, we'll have dedupe accelerating I/O thanks to a log-structured file system design. But in a nutshell that means VM storage only: we're moving away from backup toward primary storage scenarios (it's difficult to sell better dedupe for money than what MS offers for free).
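A minimal sketch of the write path being described here (entirely illustrative, not StarWind's actual code): on a log-structured design with inline dedupe, a duplicate block becomes a refcount bump and never touches the log, which is how dedupe can end up accelerating I/O instead of taxing it.

```python
from hashlib import sha256

log, index, refs = [], {}, {}   # append-only log + hash index + refcounts

def write(block: bytes) -> int:
    """Append-only write with inline dedupe; returns bytes actually logged."""
    h = sha256(block).digest()
    if h in index:
        refs[h] += 1            # duplicate: metadata-only, zero data written
        return 0
    index[h] = len(log)         # new block: remember where it lands
    refs[h] = 1
    log.append(block)           # sequential append to the tail of the log
    return len(block)

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
written = sum(write(b) for b in blocks)
print(f"logged {written} of {4096 * len(blocks)} bytes")  # 8192 of 16384
```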

Hydration does not take any time. It's actually faster than a linear read, as LESS data is read from the spindles and the cache works more effectively - it doesn't get purged as fast. I'd suggest you talk to your engineers, as what you're saying sounds like Greek, at least to me...
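That argument, modeled crudely with invented numbers: if the restore stream repeats chunks, only the first occurrence goes to the spindles and the repeats become cache hits, so a deduped read can do less physical I/O than a linear one.

```python
import random
from collections import OrderedDict

def restore_reads(stream, cache_size):
    """Count physical reads for a stream of chunk IDs behind an LRU cache."""
    cache, reads = OrderedDict(), 0
    for cid in stream:
        if cid in cache:
            cache.move_to_end(cid)          # repeat chunk: cache hit, no I/O
        else:
            reads += 1                      # first sighting: hit the spindles
            cache[cid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least-recently-used
    return reads

random.seed(1)
hot = list(range(100))                      # 100 chunks shared across the image
stream = [random.choice(hot) if random.random() < 0.6 else 10_000 + i
          for i in range(10_000)]           # ~60% duplicates, ~40% unique
print("physical reads:", restore_reads(stream, cache_size=512),
      "for", len(stream), "logical chunks")
```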

If this were true for all dedupe, why would Avamar/Data Domain only restore at something like 30 GB per hour per node? If you're layering high levels of dedupe/compression, then unless you're being SUPER aggressive with read-ahead cache, I can see how hydration slows things down on spinning disk. Now flash, that's a different story (and it's how Pure and others accelerate things).
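And the counter-argument in ballpark arithmetic (generic assumptions below, not Avamar or Data Domain specs): once rehydration turns a sequential restore into random chunk fetches, spinning disk becomes seek-bound and throughput collapses to the same order of magnitude as the figures quoted above.

```python
chunk_kb = 64    # assumed average deduped chunk size
seek_ms = 8      # assumed seek + rotational latency per random fetch
spindles = 4     # disks serving the restore in parallel

seeks_per_sec = spindles * (1000 / seek_ms)
mb_per_sec = seeks_per_sec * chunk_kb / 1024
gb_per_hour = mb_per_sec * 3600 / 1024

print(f"{gb_per_hour:,.0f} GB/hour when every chunk costs a seek")
# ~110 GB/hour for 4 spindles (~27 GB/hour per disk), versus well over
# 1,000 GB/hour for the same disks streaming a hydrated full sequentially.
```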

I don't know why more vendors don't give product walk-throughs from SEs to IT managers.

Unitrends does this twice weekly - Wednesdays at 2PM EST and Fridays at 11AM EST, and during the month of November, we're donating $5/attendee to the Hurricane Sandy Relief Fund. You can find out more here. We can also schedule a private session if these times don't work for you.

And that's a great service, Katie. But Veeam's Whiteboard Friday is a series of 20+ discussions of this type, building from the fundamentals of the solution up to the intricacies of deployment details, all available on demand. Granted, their line card is a little longer than yours, but not outrageously so.

OK, next idea. Can we use Server 2012 as the storage layer to do thin reclaim below eager-zero thick VMDKs?

For backup purposes - absolutely (one of the scenarios of our own backup works this way). For live VMs you cannot do this...
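The mechanism that question leans on, sketched with an invented block size: an eager-zeroed thick VMDK is mostly all-zero blocks, and any zero-detecting dedupe/thin layer can collapse every one of them into a single reference.

```python
BLOCK = 64 * 1024
ZERO = bytes(BLOCK)

def reclaimable(image: bytes) -> int:
    """Bytes a zero-detecting storage layer could reclaim from an image."""
    saved = 0
    for off in range(0, len(image), BLOCK):
        block = image[off:off + BLOCK]
        if block == ZERO[:len(block)]:    # all-zero block: store one reference
            saved += len(block)
    return saved

# 16 MiB toy "VMDK": eager-zeroed except for 1 MiB of real data up front
vmdk = bytearray(16 * 1024 * 1024)
vmdk[0:1024 * 1024] = b"\xab" * (1024 * 1024)
print(f"reclaimable: {reclaimable(bytes(vmdk)) / 2**20:.0f} MiB of 16 MiB")
```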
