Can you expand on this a bit? What would a clustered configuration look like? What is the “clustering” method, other than just separate instances behind a load balancer? I assume the instances would share SQL and Redis databases (themselves being clustered)? How should gateway connections be “allocated” across the LoRa Server instances? Does gateway management and deduplication just work “magically” in such a configuration, or are there other factors to consider?

LoRa Server connection to LoRa App Server (and back)

gRPC is an HTTP/2-based RPC framework, so to make both LoRa Server and LoRa App Server highly available, you can run multiple instances of each service and put them behind a load balancer. Both LoRa Server and LoRa App Server are designed to run as multiple instances, so this will not break de-duplication etc.
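As a rough sketch of what "behind a load balancer" could mean here: Nginx (1.13.10 or later) can proxy gRPC traffic with `grpc_pass`. The upstream name and instance addresses below are assumptions, not anything from the LoRa Server docs:

```nginx
# Hypothetical example: two LoRa Server instances behind Nginx.
upstream loraserver_api {
    server 10.0.0.1:8000;   # instance 1 (address is an assumption)
    server 10.0.0.2:8000;   # instance 2
}

server {
    listen 8000 http2;      # gRPC requires HTTP/2

    location / {
        grpc_pass grpc://loraserver_api;
    }
}
```

Note that gRPC keeps long-lived HTTP/2 connections, so a simple L4 balancer may pin all calls from one client to one instance; an L7 proxy like the above balances per request.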

PostgreSQL

Redis

It appears LoRa Server uses the ‘garyburd/redigo’ Redis client? I wasn’t able to find any information indicating that this client supports Sentinel. Has anyone actually set up high availability for Redis (with automatic failover)? If so, how did you do it?
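For context, the Sentinel side of such a setup is straightforward; the open question above is only whether the client can discover the current master through it. A minimal Sentinel config (the master name and address here are placeholders, not from any LoRa Server documentation) would be something like:

```
# sentinel.conf — minimal sketch; "mymaster" and 10.0.0.10 are placeholders
sentinel monitor mymaster 10.0.0.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

With at least three Sentinel processes running this, a quorum of 2 can promote a replica automatically when the master goes down; the client still has to ask a Sentinel for the current master address instead of using a fixed one.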

So is it correct to say that all my network servers will behave as home network servers for all the devices? Is it that each NS in the cluster is an exact replica of the others, all connected to the same set of MQTT topics?

This isn’t directly related to loraserver, but is a more general problem. There’s plenty of information out there, and what’s useful will depend on your particular needs. But if it’s only a quick start to load balancing you need, you could check this out on how to achieve it easily using Nginx as a reverse proxy and balancer.

Please use this as a way of getting started and do not follow it blindly. This is just a basic example on the topic, and you should do your own research to implement it correctly. Consider also that this only deals with load balancing; running clusters of Postgres and Redis is a whole other thing. Be sure to check the links Orne posted, and for Postgres give this a look too.
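To make the quick-start concrete: a bare-bones Nginx reverse proxy balancing HTTP traffic across two instances could look like the sketch below. The upstream name, ports, and addresses are made up for illustration, and TLS details are omitted entirely:

```nginx
# Basic round-robin load balancing sketch — addresses are hypothetical.
upstream app_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

By default Nginx round-robins across the upstream servers and stops sending traffic to one that fails health checks on connection; that covers load balancing only, not the shared-state questions (Postgres, Redis) raised earlier in the thread.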