How does loraserver choose which gateway to use?


Hello there,

From the title I think my question is pretty clear. If I have several gateways, how does the loraserver choose which one to use? Does it record the gateway that was reached by the end-device for the first join request? Is this choice re-evaluated over time? I didn't find anything in the documentation about this.

We ran into an edge case where we first connected one gateway and started communicating with end-devices. We then realized its location was not optimal, so we connected another gateway in a better location and unplugged the first one. But frames destined for the end-devices that had been reached successfully by the first gateway were never sent through the second gateway. I also removed the first gateway through the lora-app-server, but with the same result. I think the loraserver is still using the first one. So for the moment my workaround is to keep both gateways plugged in! Is this normal behaviour?

Thanks in advance


After de-duplication of the uplink, it will use the gateway with the best SNR (and, above a certain SNR threshold, the best RSSI) for the downlink in one of the RX windows. This set of gateways is stored, and in the case of a Class-C downlink it will again use the best gateway from this set (until the device sends a new uplink, after which a new set of gateways "close" to your device is generated).
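The selection described above could be sketched roughly as follows. This is an illustrative sketch only, not loraserver's actual code: the function name, the tuple layout, and the SNR margin value are all assumptions.

```python
SNR_MARGIN = 5.0  # assumed headroom above the required SNR; beyond it, RSSI decides

def select_downlink_gateway(rx_infos, required_snr):
    """Pick the downlink gateway from the de-duplicated uplink.

    rx_infos: list of (gateway_id, snr, rssi) tuples, one per gateway
    that received the uplink; required_snr: demodulation floor for the
    data-rate used.
    """
    def rank(info):
        _, snr, rssi = info
        if snr - required_snr >= SNR_MARGIN:
            # Comfortable SNR headroom: compare gateways by RSSI instead.
            return (1, rssi)
        # Otherwise rank by SNR (any headroom group beats this group).
        return (0, snr)

    return max(rx_infos, key=rank)[0]
```

For example, with two gateways both well above the SNR floor, the one with the stronger RSSI wins even if its SNR is slightly lower.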



I understand the general algorithm, but what is supposed to happen when a gateway is unplugged and/or deleted through lora-app-server? Shouldn't this set of "preferred" gateways be re-evaluated?

Let me explain because we have a very specific scenario:

  • connect one gateway and communicate with a group of end-devices
  • connect a second gateway (that is in theory reachable by all the end-devices of this first group) and communicate with a second group of end-devices
  • disconnect the first gateway

At this point we were no longer able to communicate with the first group of end-devices, even though the second gateway was within their range.

How are we supposed to manage this scenario? I see several options:

  • reboot the end-devices manually so that JoinRequests are sent again and a new session is established
  • we have a mechanism that re-requests a join after 1 day. Maybe this delay is too long?
  • I don't know if something can be done in the loraserver about this. Maybe if the preferred gateway set is empty it could try other gateways?

Please tell me what you think about this.
Thanks !


Is this Class-A, Class-B or Class-C?


We have Class-C devices


In that case the downlink gateway is defined by the gateways receiving the most recent uplink. Deleting a gateway or disconnecting a gateway does not have any effect on this (currently).
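A minimal sketch of why this happens, under the assumption (consistent with the answer above) that the per-device gateway set lives in the device session and is only refreshed on uplink. Class and method names here are hypothetical, not loraserver internals:

```python
class DeviceSession:
    """Toy model of a device session's stored gateway set."""

    def __init__(self):
        self.gateway_set = []  # ordered best-first; refreshed only on uplink

    def handle_uplink(self, gateway_ids):
        # Each de-duplicated uplink replaces the stored set entirely.
        self.gateway_set = list(gateway_ids)

    def classc_downlink_gateway(self):
        # Class-C downlinks always reuse the best gateway from the last
        # uplink; deleting or unplugging a gateway does not touch this set.
        if not self.gateway_set:
            raise RuntimeError("no known gateways for this device")
        return self.gateway_set[0]
```

So until the device sends a fresh uplink that the second gateway receives, every Class-C downlink keeps targeting the (now absent) first gateway.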

What you could do is send an uplink message periodically. In general I think this is a good idea anyway, to validate the connectivity of the device.


Do you think the set of gateways should be updated when a gateway is deleted or disconnected?
What period would you advise for the connectivity validation?



Hi guys, Matty here.
When LoRa Server (LS) assigns a gateway (GW) to speak to a specific end-device, is there any addressing in the MAC layer stopping that end-device from talking to any GW other than the assigned one? The answer is very likely "no": as long as the MAC layer addresses that end-device, it has to respond accordingly.
This raises another idea besides periodic uplinks, which could eat into the duty-cycle budget of the end-devices. I believe LS could change the assigned GW when, after a few attempts, it doesn't receive the end-device's acknowledgement. If a set of GWs is available it can fall back to the second choice, or if there isn't one (e.g. a new GW), it could try that one. What do you think, guys?
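The fallback idea above could look something like this. Purely a hypothetical sketch of the proposal (not existing loraserver behaviour); all names and the retry count are assumptions:

```python
def pick_gateway_with_fallback(preferred, all_gateways, failed_attempts,
                               max_attempts=3):
    """Choose a downlink gateway, falling through the candidates as
    confirmed downlinks go unacknowledged.

    preferred: gateway set from the last uplink, best-first
    all_gateways: every gateway known to the network server
    failed_attempts: unacknowledged attempts so far for this downlink
    """
    # Try the preferred set first, then any other known gateway.
    candidates = preferred + [g for g in all_gateways if g not in preferred]
    index = failed_attempts // max_attempts  # advance after max_attempts failures
    if index >= len(candidates):
        return None  # every known gateway has been exhausted
    return candidates[index]
```

With `max_attempts=3`, the server would retry the best-known gateway three times, then move to the next candidate, eventually trying gateways that never appeared in the device's set.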