Using the latest D9 install. I've been having an issue on some machines where the mapped drives are created at boot (via the login script) but show up red-crossed, and the Slave can't get access even though the drives appear mapped.

I got round this by using Group Policy instead of the login script, and these machines now boot with the drives OK. But it seems the Slave is starting too soon, as it still doesn't render with the files from the mapped drives! Is there any way to delay the start of the Slave process, maybe?

Or has anyone else had this issue and worked out a good workaround? At the moment I have to start these machines manually and check the drives before they can render, which means I sometimes forget and end up with black frames!
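One workaround that's been suggested for this kind of race is a small wrapper that polls the mapped drives until they're actually browsable, and only then starts the Slave. A minimal sketch (the drive letters and the polling numbers are placeholders, not anything Deadline-specific):

```python
import os
import time

def wait_for_drives(paths, timeout=300, interval=5):
    """Poll until every path in `paths` is browsable, or give up after
    `timeout` seconds. Returns True once all paths respond, False on timeout.
    """
    give_up_at = time.time() + timeout
    while True:
        # A red-crossed drive typically fails this check until the
        # network connection is actually re-established.
        missing = [p for p in paths if not os.path.isdir(p)]
        if not missing:
            return True
        if time.time() >= give_up_at:
            return False
        time.sleep(interval)
```

If `wait_for_drives([r"X:\\", r"Y:\\"])` returns True, the script can go on to launch the Slave executable (e.g. via `subprocess.Popen`); if it times out, it's better to bail out loudly than to render black frames.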

So looking at the two options, I'll definitely get the mapped drives set up now and see how that helps - might be just the ticket!

Regarding the other option: we have the render machines start with the Launcher not as a service, set to start the Slave on Launcher startup. Does that setting still work in this case? It's worded like it's mainly for when the Launcher is running as a service. Out of interest, is there any benefit to running it as a service?

There are usually two reasons to run the Launcher and Slave as a service. The first is that people might have physical access to the machine, and you're in a company large enough that having people poke around on render nodes could be a problem. The second is if you want to render on a workstation without directly interrupting someone's work. We have settings in Deadline to clamp the Slave's CPU affinity (Windows / Linux only). Not having to see the Slave and all the apps it opens is pretty helpful there.
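For a feel of what clamping CPU affinity means at the OS level (separate from Deadline's own setting), the Linux scheduler exposes it directly in Python's standard library; this is just an illustration of the concept, not how Deadline implements it:

```python
import os

def clamp_affinity(pid, cores):
    """Restrict process `pid` to the given set of CPU core indices
    (Linux-only stdlib call; pid 0 means the current process).

    After this, the scheduler will only run the process's threads on
    `cores`, leaving the remaining cores free for interactive work.
    Returns the set of cores the process is now allowed to use.
    """
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)
```

So clamping a background renderer to, say, `{0, 1}` on an 8-core workstation leaves six cores untouched for whoever is sitting at the machine.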