I have 5 IP addresses pointing to multiple subdomains, each hosted on a different server.
I was able to create certificates for the subdomains pointing to a single IP address, but not for those on different IP addresses. I have tried the webroot method and manual authentication with no luck; since the other subdomains are hosted on different servers, I guess Let's Encrypt is not able to validate them.

Since I am using failover for those domains, SSL verification fails with a mismatch error. When the domain points to the correct CNAME record there are no issues, but if the server goes down and we switch the CNAME record to another subdomain, a mismatch occurs. All I need is a single certificate covering all the subdomains that point to different IPs. Is there a solution for that?

If you are using the official certbot client, then the DNS challenge is the easiest way of doing this.
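As a sketch, a manual DNS challenge with certbot looks something like this (the domain names are placeholders; substitute your own):

```shell
# Manual DNS challenge: certbot prints a TXT record to create under
# _acme-challenge.<domain>. Validation happens purely via DNS, so it
# works regardless of which IP or server each subdomain points to.
certbot certonly --manual --preferred-challenges dns \
  -d example.com -d sub1.example.com -d sub2.example.com
```

With a DNS provider that has an API and a matching certbot DNS plugin, the same flow can be fully automated; without API access the TXT records have to be created by hand at each renewal.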

If you don’t have API access to your DNS (without which automating the process is more difficult), then try the GetSSL alternative client, which was written specifically for working with domains on remote servers. Note: I wrote the GetSSL script, so I am not totally unbiased in my answer here.

You can, yes. The certbot client doesn’t currently let you do that across different servers, but GetSSL does, so you can use it for the HTTP challenge with domains on different remote servers.

@schoen Thank you.
I have mounted the root path locally, but when I run ./letsencrypt certonly --webroot --webroot-path /mnt/host/ --renew-by-default --text --agree-tos -d <domain name>, it says /mnt/host/ does not exist or is not a directory. It's a permissions issue, I guess.
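A quick way to confirm whether it really is a mount or permissions problem is to check that the directory exists and is writable by the user running certbot. A small sketch (the `/mnt/host/` path is the one from the command above; adjust to your mount point):

```shell
# Sanity-check a webroot path before running certbot: it must exist
# and be writable so challenge files can be placed under .well-known/.
check_webroot() {
  local dir="$1"
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  [ -w "$dir" ] || { echo "not writable: $dir"; return 1; }
  echo "ok: $dir"
}

check_webroot /mnt/host/ || echo "fix the mount/permissions before running certbot"
```

Note that with sshfs, other local users (including root, depending on how you ran certbot) cannot see the mount unless it was mounted with `-o allow_other`.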

You could also use something like rclone to sync your SSL certs to multiple cloud storage providers and configure each server’s vhost to look for them at the synced paths. For my Centmin Mod LEMP stack I am working on addons/rclone.sh (https://community.centminmod.com/posts/39190/), which will be able to sync my Let's Encrypt SSL certs to, say, Dropbox, Google Drive, OneDrive or Amazon S3, and have my nginx servers look to the local sync directory for SSL certs instead.
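A minimal crontab sketch of that idea (the remote names `gdrive:` and `dropbox:` are assumptions; they would be whatever remotes you set up with `rclone config`):

```shell
# Push renewed certs to cloud storage nightly; each server then pulls
# (or mounts) the same remote path for its vhost ssl_certificate lines.
15 2 * * * rclone copy /etc/letsencrypt/live gdrive:letsencrypt-live
20 2 * * * rclone copy /etc/letsencrypt/live dropbox:letsencrypt-live
```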

I have mounted all the web root directories locally using sshfs. When I used the staging server, the response was "The dry run was successful".
But when I tried to run it against the production server, it said "Internal Server Error".
The command I used is ./certbot-auto certonly --text --agree-tos --webroot --webroot-path /mnt/host1/ -d <Domain Name> --webroot-path /mnt/host2/ -d <Domain Name1> -d <Domain Name2> --webroot-path /mnt/host3/ -d <Domain Name> --webroot-path /mnt/host4/ -d <Domain Name> --webroot-path /mnt/host5 -d <Domain Name>.

Incident Status: Partial Service Disruption — acme-v01.api.letsencrypt.org (Production)
October 31, 2016 5:44 AM UTC [Investigating] We are looking into a problem causing some users to experience errors when attempting to issue a certificate.

The technique I am using might be helpful. I have multiple domains and subdomains served by the same server, and multiple servers serving the same domain for load balancing. What I did was proxy all /.well-known/ requests arriving at any of my webservers to another server used specifically for renewing certificates. Let’s call this server crt-srvr.
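On nginx, that proxying can be sketched with a single location block on every front-end server (crt-srvr is the hostname from the description above; assuming the certificate server answers plain HTTP on port 80):

```nginx
# Forward all ACME HTTP-01 validation requests to the central
# certificate-renewal server, regardless of which backend this
# front end normally serves.
location /.well-known/acme-challenge/ {
    proxy_pass http://crt-srvr;
    proxy_set_header Host $host;
}
```

Passing the original `Host` header along lets crt-srvr serve challenge files for any of the proxied domains from one webroot.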

I create or renew all certificates on this server only, and I can do it for any domain I own, because all HTTP(S) requests to the .well-known directory go to this server. After the certificates are renewed, I just distribute them to the servers that need them. For that purpose I have the directories with certs mounted as shared NFS on those servers; it’s easier than physically copying them to dozens of machines. On those machines I have a cron job that simply reloads the webserver every night, though I’m thinking about writing a proper script. A slightly dirty solution, but I’m too lazy to make a better one.
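The nightly-reload cron job described above amounts to a one-line crontab entry like this (the service name `nginx` and systemd are assumptions; use whatever webserver and init system you actually run):

```shell
# Reload the webserver every night at 03:00 so it picks up any
# renewed certificates from the shared NFS mount. A reload keeps
# existing connections alive, unlike a full restart.
0 3 * * * systemctl reload nginx
```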