The server group functionality is nice, but I've been struggling to find any documentation describing what happens when deployments are applied to collections with this setting enabled. With no custom node drain or resume scripts applied, are the nodes simply rebooted without any sensitivity to the cluster resources? Are there any example node drain/resume scripts available? What I'd eventually like to see is a truer integration of Cluster Aware Updating and a clear picture of how that process would proceed with SCCM. Right now, SCCM can be used to select specific patches, place them in a SUG, and deploy them, but using SCCM to patch clusters is still nebulous and feels... dangerous. Cluster Aware Updating is much healthier for the clusters, but it has no connection to SUGs in SCCM, which means WSUS has to be managed directly as well.

It would be prudent to:
1. Include basic documentation on the use of this feature.
2. Acknowledge that the feature appears functional but has issues in 1606; the technical preview for 1609 does not even mention this feature or improvements to it.
3. Make this feature a number-one priority, as it is a basic requirement for datacenter automation.
4. Release no new version, preview or production, without updates to this feature, which is currently broken.

Hi, I did try this out with a two-node cluster. The following happened:
Allowed offline 51%: both nodes rebooted at the same time.
Allowed offline 50%: both nodes rebooted at the same time.
Allowed offline 49%: neither node rebooted. Even though the restart windows indicated that an automatic reboot would happen after the grace period, nothing happened; both windows showed 00s remaining, and neither node rebooted.
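If the "allowed offline" percentage were applied as a simple floor over the node count, a two-node cluster would be expected to behave as sketched below. This is only an assumption for illustration; SCCM's actual rounding behavior is undocumented, and the observed results above (both nodes rebooting at 51% and 50%) do not match it, which supports the suspicion that the feature is broken.

```python
import math

def max_nodes_offline(node_count: int, allowed_offline_pct: float) -> int:
    """Expected number of nodes that may be offline at once, assuming
    the percentage is applied as a simple floor (an assumption; SCCM's
    real rounding rule is not documented)."""
    return math.floor(node_count * allowed_offline_pct / 100)

# Two-node cluster, mirroring the test above:
print(max_nodes_offline(2, 51))  # 1 -> expected: one node reboots at a time
print(max_nodes_offline(2, 50))  # 1 -> expected: one node reboots at a time
print(max_nodes_offline(2, 49))  # 0 -> expected: no node may go offline
```

Under this model, 51% and 50% should both have produced serialized reboots (one node at a time), and 49% should have blocked reboots entirely, so the simultaneous reboots observed at 51%/50% look like a defect rather than a rounding quirk.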

What would be great would be the ability to create one collection with all clustered machines as members and have SCCM handle it all at once. At the moment we need to create one collection for each cluster. I need to work with maintenance windows, so creating x collections plus x maintenance windows is not feasible for me. In the tests I did with Technical Preview 3, it was a lot of work to configure and maintain. One collection with all clusters and the same maintenance window would be a great improvement. Otherwise I'll continue to handle cluster patching with Orchestrator runbooks.

You should consider including the following functionality and error checking:

What should happen if one server in a sequence fails updating? Should the sequence stop, perform a rollback, and report the error?
(generally, the ability to rollback software updates deployed by SCCM would be nice)
At the very least, make it possible to configure whether the sequence should stop or continue in case of an error.
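The requested stop-or-continue behavior could be sketched like this. All names here (run_sequence, update_fn) are hypothetical; SCCM offers no such setting today, which is the point of the request.

```python
def run_sequence(servers, update_fn, stop_on_error=True):
    """Patch servers in order; optionally stop at the first failure.
    update_fn is a hypothetical callable returning (success, return_code)."""
    results = {}
    for server in servers:
        success, code = update_fn(server)
        results[server] = code
        if not success and stop_on_error:
            # Remaining servers are left untouched so an operator can
            # investigate (or roll back) before continuing the sequence.
            break
    return results

# Example: the second server fails with a Windows Installer-style code.
outcome = {"node1": (True, 0), "node2": (False, 1603), "node3": (True, 0)}
results = run_sequence(["node1", "node2", "node3"], lambda s: outcome[s])
print(results)  # {'node1': 0, 'node2': 1603} -- node3 was never attempted
```

With stop_on_error=False the sequence would record node2's failure but still attempt node3, which is the other behavior the request asks to make configurable.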

Using the “Specify the maintenance sequence” scenario:
Each step should have pre- and post-script options, including error handling (return codes). This is required to verify that a cluster resource has been successfully moved to another cluster node before continuing the update sequence.
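The pre/post hooks with return-code checks might look like the sketch below. The function and script names (patch_step, drain_node, resume_node) are hypothetical placeholders for the scripts that would actually move cluster resources; only the control flow is the point.

```python
def patch_step(node, pre_script, patch, post_script):
    """Run the pre-script (e.g. drain cluster resources off the node),
    then patch, then the post-script (e.g. resume the node).
    Any nonzero return code aborts the step so the sequence
    controller can stop, report, or roll back."""
    rc = pre_script(node)
    if rc != 0:
        return ("pre_failed", rc)   # resources not moved: do NOT patch
    rc = patch(node)
    if rc != 0:
        return ("patch_failed", rc)
    rc = post_script(node)
    if rc != 0:
        return ("post_failed", rc)
    return ("ok", 0)

# Hypothetical scripts returning shell-style exit codes:
drain_node = lambda n: 0   # e.g. a script wrapping Suspend-ClusterNode
install = lambda n: 0
resume_node = lambda n: 0  # e.g. a script wrapping Resume-ClusterNode
print(patch_step("node1", drain_node, install, resume_node))  # ('ok', 0)
```

The key property is that a failed drain (nonzero pre-script return code) prevents the patch from ever running on a node that still owns cluster resources.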

Make sure that the function works with both software deployments and Windows updates.

Make sure that there are good logs. I can foresee issues troubleshooting a failed update sequence if logging is missing.

Automated support for patching SharePoint farms (without having to take the entire farm offline)

Validate that cluster services are online pre- and post-patching
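Such a pre/post validation could be as simple as checking that every clustered resource reports Online before patching starts and after it finishes. A minimal sketch, assuming the resource states arrive as plain strings (how they would actually be collected from the cluster is left out):

```python
def cluster_healthy(resource_states):
    """Return True only if every cluster resource is Online.
    resource_states: dict mapping resource name -> state string
    (the 'Online'/'Offline' strings are an assumption for illustration)."""
    return all(state == "Online" for state in resource_states.values())

before = {"SQL Server": "Online", "Cluster Disk 1": "Online"}
after = {"SQL Server": "Offline", "Cluster Disk 1": "Online"}
print(cluster_healthy(before))  # True  -> safe to start patching
print(cluster_healthy(after))   # False -> flag the deployment as failed
```

Running the same check before and after each node is patched would catch the case where patching succeeds but a cluster resource fails to come back online.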

Improved in-console monitoring for patching critical servers such as clusters: more detailed state messages sent back to the server, with state messages for critical server patching sent through at a higher priority (like the state messages for SCEP)