I have tried granting full access, even to Everyone, and I still get this error. The problem I'm now having is that I can't join the other host to the cluster. It was running for about an hour, then I installed updates (which I've since removed), and I still can't join the other node.

So I have two issues

1) Can't raise the functional level because of a permission error.

2) Can't join another node to the cluster. When I tried to create a new cluster with only this node, the functional level was 10.3. That's why I'm assuming it will not join the 9.8 cluster.

Here is the error I get when adding the node to the cluster:

* Cluster service on node NYFBH2 did not reach the running state. The error code is 0x5b4. For more information check the cluster log and the system event log from node NYFBH2. This operation returned because the timeout period expired.

* The server 'NYFBH2.****.local' could not be added to the cluster.
An error occurred while adding node 'NYFBH2.****.local' to cluster 'NYFB_Cluster'.
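As a side note, the 0x5b4 in that message is a standard Win32 error code and can be decoded from any command prompt on the nodes; it maps to ERROR_TIMEOUT (1460 decimal), which matches the "timeout period expired" text:

```powershell
# 0x5b4 hex = 1460 decimal; net helpmsg translates Win32 error codes.
net helpmsg 1460
# This operation returned because the timeout period expired.
```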

>>I have tried adding full access even to everybody and still get this error

Which account are you currently logged on with: a local administrator or a domain administrator?

>>it was running for about an hour and I did updates, which I've since removed still can't join the other node.

Which hotfixes did you install? Could you please provide the details?

>>Cluster service on node NYFBH2 did not reach the running state. The error code is 0x5b4

That is a general error; we need to check the cluster logs to find the root cause.
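For reference, the cluster log is generated with the Get-ClusterLog cmdlet from an elevated PowerShell prompt; the destination folder below is just an example:

```powershell
# Collect the last 60 minutes of cluster activity from every node
# into C:\ClusterLogs (create the folder first).
Get-ClusterLog -Destination C:\ClusterLogs -TimeSpan 60

# Or collect from just the node that failed to start:
Get-ClusterLog -Node NYFBH2 -Destination C:\ClusterLogs -TimeSpan 60
```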

(Please understand that to troubleshoot this issue thoroughly, we generally need to debug the cluster log files. Unfortunately, debugging is beyond what we can do in the forum. If the issue is urgent, a support call to our product service team is needed for the debugging service. We recommend that you contact Microsoft Customer Support Services (CSS) for assistance so that this problem can be resolved efficiently. To obtain the phone numbers for a specific technology request, please take a look at the web site listed below:)

Note: please back up the registry first, and perform these steps on all nodes.

1. Modify the registry key as below (if the key does not exist, create it manually):
-----------------------------
Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters\
Name: DisabledComponents
Type: REG_DWORD
Value: 0xFF
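If scripting is easier, the same value can be set from an elevated prompt instead of through regedit (again, back up the registry first; a reboot is needed for the change to take effect):

```powershell
# Equivalent of the manual edit above; creates the value if missing.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" `
    /v DisabledComponents /t REG_DWORD /d 0xFF /f
```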

2. Reinstall Microsoft Failover Cluster Virtual Adapter

1. Open "Device Manager" and select "Show hidden devices" under the View menu.
2. Under Network adapters, you should see the "Microsoft Failover Cluster Virtual Adapter" driver. Click Uninstall.
3. Reinstall the driver manually, as follows.
4. Choose "Add legacy hardware" under the Action menu.
5. Select the option "Install the hardware that I manually select from a list".
6. Select "Network adapters" and click Next.
7. Choose "Have Disk", then browse to the "C:\Windows\Inf" folder and select "NETFT.INF".
8. Choose "Microsoft" under Manufacturers and pick "Microsoft Failover Cluster Virtual Adapter" from the list.
9. Click Install. This will install the driver.

3. Rename the node that cannot be added.

Best Regards,

Frank

Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact
tnmff@microsoft.com

Based on my research, in order for the Update-ClusterFunctionalLevel cmdlet to succeed, all nodes must be running Windows Server 2016, and all nodes must be online.
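For reference, the check and the upgrade look like this from PowerShell on one of the nodes; -WhatIf previews the change without committing it (the upgrade itself is one-way):

```powershell
# ClusterFunctionalLevel: 8 = 2012 R2, 9 = 2016, 10 = 2019.
Get-Cluster | Format-List Name, ClusterFunctionalLevel

# Preview first, then commit once all nodes are online and upgraded.
Update-ClusterFunctionalLevel -WhatIf
Update-ClusterFunctionalLevel
```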

2012 R2 and 2016 nodes can run in the same cluster as long as it is at the 2012 R2 cluster functional level, and 2016 and 2019 hosts can run in the same cluster as long as it is at the 2016 cluster functional level. These are the only combinations you can have when there is a mix of operating systems.

1) It is a physical cluster running on two HP servers that are identical except for the hard drive model numbers.

2) The cluster has two nodes, NYFBH1 and NYFBH2. Both are running Server 2019 Desktop Experience, with the Hyper-V role and Failover Clustering installed.

3) They were both 2012 R2. I upgraded NYFBH2 to 2016, copied the roles to a new cluster on 2016, upgraded NYFBH1 to 2019, joined it to the cluster, moved the roles over, and then upgraded the 2016 node to 2019; all of the upgrades were in place. Once the last 2019 upgrade was done on NYFBH2, the cluster was functional until I ran Windows updates on NYFBH2. I thought the network had gotten messed up, so I recently went back to 2012 R2 on NYFBH2 with a restore and did an in-place upgrade from 2012 R2 to 2019; the cluster worked again until Windows updates completed on NYFBH2. I then attempted to create a standalone cluster on NYFBH2 and it created one fine; the version was 10.3 on NYFBH2, while the version as it sits on NYFBH1 is 9.8. So I'm thinking the cluster functional level is the culprit, and because I can't upgrade it with the PowerShell command, I'm kind of stuck. I could try going back to 2012 R2 and upgrading the functional level before doing the Windows updates, but I don't think it will help, because I can't upgrade the functional level with NYFBH1 as the only node.
My other option is to create the new cluster on NYFBH2 at the 10.3 functional level and try to join NYFBH1 to it while it's at 9.8.
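If I'm reading those version numbers right, the 9.8 and 10.3 that Failover Cluster Manager shows should correspond to the ClusterFunctionalLevel and ClusterUpgradeVersion properties, so they can be compared from PowerShell against each cluster (property names as exposed on 2016 and later):

```powershell
# On each cluster: functional level 9.x = 2016-level, 10.x = 2019-level.
Get-Cluster | Format-List Name, ClusterFunctionalLevel, ClusterUpgradeVersion

# Per-node supported version ranges can also hint at join compatibility.
Get-ClusterNode | Format-Table Name, State, NodeHighestVersion, NodeLowestVersion
```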

Thanks for your help on this.

I tried the few things you suggested, and no go.

Just a note: I'm trying to preserve a file server role that has a ton of shares I don't want to recreate in a new cluster, but that may be my best option. I would have to do it over a weekend while users are not on the servers. That is also why I thought about going back to 2012 R2, upgrading to 2019 again, moving the roles off to NYFBH2 before I do Windows updates, and evicting NYFBH1 so the roles don't drain to it during a reboot.

I'm also thinking the permission error I get when trying to raise the functional level has nothing to do with permissions.

Thanks for your help. I think I'm moving in a direction that others would take as well. I don't want to rush to judgment and call it a bug; it's more of a timing thing. I also realize the supported method is full rebuilds, not in-place upgrades. I was hoping to save some time, and other than the cluster, the in-place upgrades went very smoothly.