Let’s head to the PowerShell Gallery website, https://www.powershellgallery.com, and find the latest stable version of PowerCLI. As of today, version “7736736” is still in beta, so we are going to use the “-RequiredVersion” option to download and install the previous, stable release.
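As a sketch, the install command looks like this (the version string below is a placeholder; substitute the stable version number shown on the gallery page for VMware.PowerCLI):

```powershell
# Install a specific stable release instead of the latest (beta) build.
# 6.5.1 is a placeholder version - replace it with the stable release
# listed on the PowerShell Gallery page for VMware.PowerCLI.
Install-Module -Name VMware.PowerCLI -RequiredVersion 6.5.1 -Scope AllUsers
```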

Note: you might need to press “A” to allow the install to proceed, as PSGallery might not be in your trusted list of repositories to download and install modules from:
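If you would rather not answer that prompt every time, the repository can be marked as trusted up front with the standard PowerShellGet cmdlet:

```powershell
# Mark the PSGallery repository as trusted so Install-Module no longer prompts.
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
```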

Ideally, you should now be able to run any PowerCLI cmdlet, and it will work as expected.

Note: you must uninstall older (non-PowerShell Gallery) versions of PowerCLI manually from Programs and Features, and then re-install VMware.PowerCLI via PowerShellGet; otherwise they will conflict with the new version, and it will only work partially.

Please see the write-up below in case the VMware.PowerCLI module does not load automatically.

Recently, I had to re-install the OS on my primary workstation at home. And of course, I had to get PowerShellGet working again before I could install the PowerCLI or vDocumentation modules. Naturally, it took some time to hunt down the bits and pieces on getting PowerShellGet up and running. So, this time around I have decided to document getting PowerShellGet working on Windows 7, as well as making sure that I am using the latest version of it.
Windows 7 Pro/ENT comes with PowerShell v2 by default and doesn’t contain PowerShellGet.

All the commands are run with elevated (a.k.a. administrator) privileges in an x64 PowerShell console.

Let’s start the PowerShell console and see the currently available modules:
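For reference, listing what is already on the module path looks like this (the second command just narrows the list to check whether PowerShellGet is present at all):

```powershell
# List every module visible on the current PSModulePath.
Get-Module -ListAvailable

# Check specifically whether PowerShellGet is among them.
Get-Module -ListAvailable -Name PowerShellGet
```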

I make quite extensive use of the deduplication feature on Windows Server 2012 and Windows Server 2016. And it does wonders. Deduplication savings range from 10% (WSUS content, backup servers) to upwards of 55% (file share servers and SCCM Distribution Points with ISOs, .wim images, packages, and applications).
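For context, enabling and checking deduplication on a data volume looks roughly like this (a sketch; it assumes the FS-Data-Deduplication feature is available and that D: is the data volume, both of which are examples):

```powershell
# Install the deduplication feature (Windows Server 2012/2016).
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the data volume and kick off an optimization job.
Enable-DedupVolume -Volume "D:" -UsageType Default
Start-DedupJob -Volume "D:" -Type Optimization

# Review the space savings once the job has run.
Get-DedupStatus -Volume "D:"
Get-DedupVolume -Volume "D:" | Format-List SavedSpace, SavingsRate
```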

Recently, a case was brought to my attention where a drive holding wsusContent on a freshly installed Windows 2016 server started running out of space. The initial drive size was about 100 GB, and after fe…

Ah yes, you have had enough of it. The WSUS content folder keeps growing in size day by day. You have tried feeding it 20 GB, then another 50, then 100 GB. No, it won’t stop growing, and it keeps demanding more and more space.

No worries, there is a solution for that… maybe. In any case, as the gecko says, “15 minutes could save you 15% or more” of your valuable disk space on your (v)SAN storage.
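Before throwing more disk at it, it is worth running the built-in server cleanup. A sketch using the UpdateServices module, run on the WSUS server itself:

```powershell
# Run the WSUS cleanup wizard from PowerShell (UpdateServices module).
# This declines dead updates and removes content files nothing references.
Get-WsusServer | Invoke-WsusServerCleanup `
    -DeclineSupersededUpdates `
    -DeclineExpiredUpdates `
    -CleanupObsoleteUpdates `
    -CleanupUnneededContentFiles `
    -CompressUpdates
```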

Solution 1) Have workstations download updates from MS directly

Downside: traffic, bandwidth, and time, with multiple unknown variables around how long it takes, the amount of bandwidth used, and the cost of that bandwidth.
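For illustration, pointing a workstation back at Microsoft Update amounts to clearing the “use intranet update server” policy setting; a sketch (this is the registry key the corresponding group policy writes):

```powershell
# Tell the Windows Update agent to stop using the intranet (WSUS) server.
# This is the value behind the "Specify intranet Microsoft update service
# location" group policy; clearing it sends clients back to Microsoft Update.
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" `
    -Name UseWUServer -Value 0
Restart-Service wuauserv
```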

AWS’s Storage Gateway solutions are designed to be used as backup destinations for your infrastructure. There are three types of Storage Gateway offered by AWS: File, Volume, and Tape Gateway.

The overall process is: you either deploy a local on-premises VM (Hyper-V/VMware), or a cloud-based one, which in turn of course runs on an AWS EC2 instance. You need to add an additional virtual disk to the Storage Gateway to cache the data before it is uploaded. The disk has to be a minimum of 150 GB in size, and you can add several drives for a total of 16 TB across all drives. You can’t allocate a 150 GB drive to begin with and then increase its size down the road; you will have to add a new disk if you want to increase the cache size.

There is an additional requirement for Volume and Tape Storage Gateways: you will need “Upload Buffer” drive(s) along with the caching drives. The upload buffer drive has to be a minimum of 150 GB and a maximum of 2 TB in size.
As the name suggests, the upload buffer’s purpose is straightforward: backup data from the cache drives is transferred to the upload buffer drive and afterwards copied to AWS’s storage; the buffer then gets re-filled from the cache drive with more data, and so on.
The cache drive’s purpose, on the other hand, is twofold: besides pumping more data into the upload buffer, it keeps a cache of your most recent backup data, depending on the cache drive size. The gateway will check the cache drive to see if the data is still available there; if that is the case, then you don’t have to pull the data down from AWS storage, and of course you don’t incur data transfer charges (billed per GB of data retrieved) from AWS.
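Once the gateway VM is up, attaching the extra disks as cache and upload buffer can also be done from the AWS CLI; a sketch (the gateway ARN and disk IDs below are placeholders — list the real ones with `list-local-disks` first):

```powershell
# Placeholders: substitute your own gateway ARN and the disk IDs
# reported by list-local-disks.
$arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"

# List the local disks the gateway can see (their IDs are needed below).
aws storagegateway list-local-disks --gateway-arn $arn

# Dedicate one disk (minimum 150 GB) as cache storage...
aws storagegateway add-cache --gateway-arn $arn --disk-ids "pci-0000:03:00.0-scsi-0:0:1:0"

# ...and another (150 GB to 2 TB) as the upload buffer (Volume/Tape gateways only).
aws storagegateway add-upload-buffer --gateway-arn $arn --disk-ids "pci-0000:03:00.0-scsi-0:0:2:0"
```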

Data Storage: Compression, De-duplication, or Deltas?

The File-based Storage Gateway (NFS) doesn’t make use of any compression or de-duplication mechanisms. But, per the FAQ, it “uses multipart uploads and copy put, so only changed data is uploaded to S3 which can reduce data transfer”. Basically, it will compare your current file with the one that was already uploaded and upload only the changed bits, which is still good and should reduce…

The plan is to retain the current WSUS data and configuration while moving the WSUS service from an old Windows 2016 TP5 server to a new, fully licensed Windows 2016 Standard server, and to move the database from WID to a standalone SQL 2014 server.

1) Set up a new Windows 2016 server; update, patch, reboot. Install the WSUS role on it, choosing the WID database during the install. Make sure to point to a drive\folder for wsusContent.
a. Copy the wsusContent folder from the old server to the new one. Make sure you place it at the proper drive\path you identified during the post-install configuration of the new WSUS service.
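The role install and content-path portion of step 1 can be scripted; a sketch (D:\wsusContent is an example path — use whatever drive\folder you chose for the content):

```powershell
# Install the WSUS role with the Windows Internal Database (WID) backend.
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

# Post-install step: point WSUS at the drive\folder that will hold wsusContent.
# D:\wsusContent is an example path.
& "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=D:\wsusContent
```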

I recently got my hands on Teradici’s 2800-LP (low profile) offloading card for a Horizon View 7 VDI PoC implementation. The one I am using is the PCI Express version of the card, which can be installed in any server with a PCIe Gen2 x4/x8/x16 slot.

There are also MXM Type A with Mezzanine Adapter and Amulet Hotkey DXM-A versions; they are designed for HP’s Gen8 and Gen9 blade servers and for Dell M-Series blades, respectively. In any case (standalone, mezzanine adapter, or Amulet Hotkey), you can install up to two such cards per server.

There are plenty of choices on the market for GPU offloading, some of which are NVIDIA’s GRID K1/K2, NVIDIA Tesla K40/K80, and AMD’s FirePro S7150 (x2) GPU cards.