How can I build a High Availability architecture?
Thanks
You can run a cluster of multiple LoRa Server and LoRa App Server instances. This should work without any issues; you only need to put a load balancer in front of the APIs.
You can also look into setting up a cluster for your MQTT broker, PostgreSQL and Redis.
Can you expand on this a bit? What would a clustered configuration look like? What is the "clustering" method, other than just separate instances behind a load balancer? I assume the instances would share SQL and Redis databases (themselves being clustered)? How should gateway connections be "allocated" across the LoRa Server instances? Does gateway management and deduplication just work "magically" in such a configuration, or are there other factors to consider?
The easiest approach is to run the LoRa Gateway Bridge on each gateway. In that case HA is handled by the MQTT broker (LoRa Gateway Bridge <> MQTT broker <> LoRa Server). You will have to refer to your MQTT broker's documentation, as each MQTT broker handles clustering in a different way. See for example: https://stackoverflow.com/questions/26280208/cluster-forming-with-mosquitto-broker
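As one concrete illustration of broker-side redundancy, Mosquitto's bridge feature lets a gateway-local broker fail over between multiple upstream brokers. This is only a sketch: the hostnames are placeholders, and the `gateway/#` topic prefix assumes the default LoRa Gateway Bridge topic templates.

```conf
# mosquitto.conf bridge section on a gateway-local broker (illustrative).
# Hostnames and the gateway/# topic prefix are assumptions; check your
# LoRa Gateway Bridge topic configuration and your broker's docs.
connection ha-bridge
address broker1.example.com:1883 broker2.example.com:1883
round_robin false          # treat the second address as a failover, not round-robin
topic gateway/# both 0     # mirror gateway traffic in both directions at QoS 0
```

Full broker clusters (e.g. VerneMQ, EMQX) behave differently; this only shows the simplest failover pattern.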
gRPC is an HTTP/2-based RPC framework, so to make both LoRa Server and LoRa App Server highly available, you can run multiple instances of each service and put them behind a load balancer. Both LoRa Server and LoRa App Server are designed so that you can run multiple instances, so this will not break de-duplication etc.
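To make the load-balancer step concrete, here is a minimal sketch of balancing the LoRa Server gRPC API with nginx. It assumes nginx 1.13.10 or later (which added `grpc_pass`); the hostnames are placeholders, and port 8000 is the default LoRa Server API port in typical setups.

```nginx
# Illustrative nginx config for load-balancing a gRPC API across two
# network-server instances. Hostnames/ports are assumptions.
upstream network_servers {
    server ns1.example.com:8000;
    server ns2.example.com:8000;
}

server {
    listen 8000 http2;    # gRPC requires HTTP/2
    location / {
        grpc_pass grpc://network_servers;
    }
}
```

The same pattern applies to the LoRa App Server API on its own port.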
Please see: https://www.postgresql.org/docs/9.6/static/high-availability.html
Please see: https://redis.io/topics/sentinel
It appears LoRa Server uses the "garyburd/redigo" Redis client? I wasn't able to find any information indicating that this client supports Sentinel. Has anyone actually set up high availability for Redis (with automatic failover)? If so, how did you do it?
Thanks
Haven't tried it myself, but redigo mentions this repo, which describes itself as "Redis Sentinel support for the redigo library."
With LoRa App Server behind the load balancer, what would "as-public-server" be for each of them? Do we use the server's hostname or the load balancer's?
The hostname of the load balancer, as that is how you want LoRa Server to connect to LoRa App Server.
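In other words, each instance advertises the load balancer, not itself. As an illustrative fragment (the `AS_PUBLIC_SERVER` name matches older LoRa App Server releases; verify the exact flag/variable and the default port 8001 against your version's documentation, and the hostname is a placeholder):

```conf
# Each LoRa App Server instance advertises the load balancer's address,
# so LoRa Server connections are spread across all instances.
AS_PUBLIC_SERVER=lb.example.com:8001
```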
Let's assume we have two NS instances running and talking to the same App Server, with some of the devices activated on NS1 and others on NS2:
Please note that none of the NS instances have a local state. All states are stored in either Redis or PostgreSQL.
So is it correct to say that all my network servers will behave as home network servers for all the devices? Is each NS in the cluster an exact replica of the others, all connected to the same set of MQTT topics?
Yes that is correct.
Hi brocaar,
I'm trying to set up HA for LoRa. The ports that the network server has in common with the application server are 1883, 8001 and 8003. Are these the ports that must be configured on the load balancer? Thanks
Hi, brocaar
does this mean these loraserver instances have the same DSN, to share devices and use the same band region?
I want to make the lora-server highly available.
I have read this topic about High Availability:
I don't know the detailed steps to achieve it. Has anyone done this? Could you share some resources, like a guide?
Thanks !
This isn't directly related to loraserver but more of a common problem. There's plenty of information out there, and what's useful will depend on your particular needs. But if it's only a quick start to load balancing you need, you could check this out on how to achieve it easily using Nginx as reverse proxy and balancer.
Please use this as a way of getting started and do not follow it blindly. This is just a basic example on this topic, and you should do your research to implement this correctly. Consider also that this only deals with load balancing, and running clusters of Postgres and Redis is a whole other thing. Be sure to check the links Orne posted, and for Postgres give this a look too.
Thank you very very much for your answer and your tips !
Bringing this topic up again. Load balancing HTTP and UDP services is fairly straightforward. What are people doing to prevent duplication of messages when multiple instances of the same service are each connected to MQTT?
In default configurations, every MQTT subscriber to the same topic receives the same message. So if a message comes in from a gateway and makes its way into MQTT, won't both of two NS instances receive and process it? Similarly, will a message published to #/tx be picked up twice and queued for a device?
Are people using shared subscription support of advanced brokers like VerneMQ or Hive? Are there any other reasonable approaches?
Thanks.
LoRa Server and LoRa App Server handle the de-duplication of MQTT duplicates for you: chirpstack-application-server/internal/integration/mqtt/mqtt.go at master · brocaar/chirpstack-application-server · GitHub
Hi all, one more question about HA:
If I use a load balancer and two LoRa Servers, is this correct?
thx, regards, sil