How does loraserver choose which gateway to use?

Hello there,

From the title I think my question is pretty clear. If I have several gateways, how does the lora server choose which one to use? Does it record the gateway that the end-device reached with its first join request? Will this choice be re-evaluated over time? I didn’t find any entry in the documentation about this.

We ran into an edge case where we first connected one gateway and started communicating with end-devices. We then realised its location was not optimal, so we connected another gateway in a better location and unplugged the first one. But it seemed that frames destined for the end-devices that had been reached successfully through the first gateway were never sent through the second gateway. I also removed the first gateway through the lora app server, but got the same result. I think loraserver is always using the first one. So for the moment my workaround is to keep both gateways! Is this normal behaviour?

Thanks in advance
Dam

After de-duplication of the uplink, it will use the gateway with the best SNR (or, above a certain SNR, the best RSSI) for the downlink in one of the RX windows. This set of gateways is stored, and for Class-C downlinks the best gateway from this set is used again (until you do a new uplink, after which a new set of gateways “close” to your device is generated).
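For illustration, here is a rough sketch of that selection logic in Go (the struct, field names, threshold value and function are just my own placeholders, not actual loraserver code):

```go
package main

import (
	"fmt"
	"sort"
)

// rxInfo is the reception metadata one gateway reported for a single uplink.
// Field names and the SNR threshold are illustrative, not loraserver's types.
type rxInfo struct {
	GatewayID string
	SNR       float64
	RSSI      int
}

// bestDownlinkGateway sorts the de-duplicated receptions so that gateways
// already above the SNR threshold are compared by RSSI, and the rest by SNR,
// then returns the first (best) entry for the downlink.
func bestDownlinkGateway(rx []rxInfo, snrThreshold float64) rxInfo {
	sort.Slice(rx, func(i, j int) bool {
		if rx[i].SNR >= snrThreshold && rx[j].SNR >= snrThreshold {
			return rx[i].RSSI > rx[j].RSSI
		}
		return rx[i].SNR > rx[j].SNR
	})
	return rx[0]
}

func main() {
	uplink := []rxInfo{
		{GatewayID: "gw-1", SNR: 9.5, RSSI: -110},
		{GatewayID: "gw-2", SNR: 7.0, RSSI: -98},
	}
	// both gateways are above the threshold, so the better RSSI (gw-2) wins
	fmt.Println(bestDownlinkGateway(uplink, 5.0).GatewayID)
}
```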


Hello,

I understand the general algorithm, but what is supposed to happen if a gateway is unplugged and/or deleted through lora-app-server? Shouldn’t this set of “preferred” gateways be re-evaluated?

Let me explain because we have a very specific scenario:

  • connect one gateway and communicate with a group of end-devices
  • connect a second gateway (that is in theory reachable by all the end-devices of this first group) and communicate with a second group of end-devices
  • disconnect the first gateway

At this point we were not able to communicate again with the first group of end-devices, even though the second gateway was in their range.

How are we supposed to manage this scenario? I see several options:

  • reboot the end-devices manually so that JoinRequests are sent again and a new session is created
  • we already have a mechanism that sends a new JoinRequest after one day. Maybe this delay is too long?
  • I don’t know if something can be done in loraserver about this. Maybe if the preferred gateway set is empty, other gateways could be tried?

Please tell me what you think about this.
Thanks !


Is this Class-A, Class-B or Class-C?

We have Class-C devices

In that case the downlink gateway is selected from the gateways that received the most recent uplink. Deleting or disconnecting a gateway does not (currently) have any effect on this.

What you could do is send an uplink message periodically. In general I think this is a good idea, to validate the connectivity of the device.
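If it helps, here is a minimal sketch of the application-side counterpart of that idea: watching for devices that have gone silent (the struct, the DevEUIs and the silence window are placeholders, not a loraserver API):

```go
package main

import (
	"fmt"
	"time"
)

// device holds the application's own view of when a device was last heard.
// This only illustrates the periodic-uplink idea, it is not loraserver code.
type device struct {
	DevEUI     string
	LastUplink time.Time
}

// staleDevices returns devices that have not sent an uplink within maxSilence.
// If the firmware sends a heartbeat uplink e.g. every hour, anything silent
// much longer than that is worth investigating (and its stored gateway set
// is likely stale as well).
func staleDevices(devices []device, maxSilence time.Duration) []device {
	var stale []device
	now := time.Now()
	for _, d := range devices {
		if now.Sub(d.LastUplink) > maxSilence {
			stale = append(stale, d)
		}
	}
	return stale
}

func main() {
	devices := []device{
		{DevEUI: "0102030405060708", LastUplink: time.Now().Add(-30 * time.Minute)},
		{DevEUI: "1112131415161718", LastUplink: time.Now().Add(-6 * time.Hour)},
	}
	for _, d := range staleDevices(devices, 2*time.Hour) {
		fmt.Println("no uplink for a while:", d.DevEUI)
	}
}
```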

Do you think the set of gateways should be updated when a gateway is deleted or disconnected?
What period would you advise for the connectivity validation?

Thanks

Hi guys, Matty’s here.
When Lora Server (LS) assigns a gateway (GW) to speak to a specific end-device, is there any addressing or anything at the MAC layer stopping that end-device from talking to any GW other than the assigned one? The answer is very likely “no”: as long as the MAC frame addresses that end-device, it has to respond accordingly.
This raises another idea besides periodic uplinks, which may strain the duty-cycle budget of end-devices. I believe the LS application could change the assigned GW when, after a few attempts, it doesn’t receive the end-device’s acknowledgement. If there is a set of GWs available it can move on to the second choice, and if there isn’t, it may try a new one (sketched below). What do you think of it, guys?
Thanks
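To make the idea concrete, here is a purely illustrative sketch of that fallback (the gateway IDs, the transmit function and the retry count are made up, and this is not how loraserver currently behaves):

```go
package main

import (
	"errors"
	"fmt"
)

// errNoAck stands in for a missing downlink acknowledgement.
var errNoAck = errors.New("no acknowledgement from device")

// transmitDownlink is a placeholder: pretend only gw-2 currently reaches
// the device.
func transmitDownlink(gatewayID string) error {
	if gatewayID == "gw-2" {
		return nil
	}
	return errNoAck
}

// sendWithFallback tries the preferred gateway set first and, if every
// attempt fails, falls back to the remaining known gateways.
func sendWithFallback(preferred, others []string, retries int) (string, error) {
	for _, gw := range append(preferred, others...) {
		for attempt := 0; attempt < retries; attempt++ {
			if err := transmitDownlink(gw); err == nil {
				return gw, nil
			}
		}
	}
	return "", errNoAck
}

func main() {
	gw, err := sendWithFallback([]string{"gw-1"}, []string{"gw-2"}, 2)
	fmt.Println(gw, err) // gw-2 <nil>
}
```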

hey @Matty @laurre @brocaar, we are going through a similar problem. Any advice on how to change the gateway ID stored from the device uplink, so that the downlink can be sent by the new gateway and not the previous gateway which has been removed (which is the last gateway ID stored in the device session)?


This is interesting. I’ve been looking for this for a while.

Is it different in the case of a Class-A device? For Class-A, how is the gateway chosen?

Class-A: the device sends an uplink message and the AppServer/NetworkServer will send a downlink message (if it has one) through the same gateway that received the uplink message.

What happens if the new uplink, received by the new gateway, has a worse signal than the last uplink received by the gateway that is no longer operational?

For Class-A downlink, ChirpStack will always select the best gateway from the set of gateways that received the Class-A uplink.

We are still facing issues on this point. We’ve made some controlled tests in the lab, and after disconnecting (or turning off) one gateway, the downlinks are not transmitted by the other gateway. By the way, our downlinks are usually multicast. Is there any other condition that must be met, such as the SNR having to be higher at the last uplink’s gateway than at the original one? How could we work around this? All devices are configured as Class-C.

For multicast ChirpStack calculates which gateways to use (based on the last received uplinks) to reach all devices. If you turn off one of your gateways, then you would have to wait for the next (Class-A) uplink, for ChirpStack to update its state.
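As a rough illustration of the idea (not ChirpStack’s exact algorithm), a greedy selection based on the gateways seen in each device’s last uplink could look like this:

```go
package main

import "fmt"

// selectMulticastGateways greedily picks gateways until every device that
// reported at least one gateway in its last uplink is covered. This is a
// simplified illustration, not ChirpStack's actual implementation.
func selectMulticastGateways(lastUplinkGateways map[string][]string) []string {
	// invert: gateway -> set of devices it reaches
	reach := map[string]map[string]bool{}
	uncovered := map[string]bool{}
	for dev, gws := range lastUplinkGateways {
		uncovered[dev] = true
		for _, gw := range gws {
			if reach[gw] == nil {
				reach[gw] = map[string]bool{}
			}
			reach[gw][dev] = true
		}
	}

	var selected []string
	for len(uncovered) > 0 {
		// pick the gateway covering the most still-uncovered devices
		bestGW, bestCount := "", 0
		for gw, devs := range reach {
			count := 0
			for dev := range devs {
				if uncovered[dev] {
					count++
				}
			}
			if count > bestCount {
				bestGW, bestCount = gw, count
			}
		}
		if bestCount == 0 {
			break // remaining devices were not heard by any gateway
		}
		selected = append(selected, bestGW)
		for dev := range reach[bestGW] {
			delete(uncovered, dev)
		}
	}
	return selected
}

func main() {
	lastUplinks := map[string][]string{
		"dev-a": {"gw-1"},
		"dev-b": {"gw-1", "gw-2"},
		"dev-c": {"gw-2"},
	}
	fmt.Println(selectMulticastGateways(lastUplinks)) // e.g. [gw-1 gw-2]
}
```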

Please note, in the next release it will be possible to explicitly define which gateways to use for multicast :slight_smile:

Firstly, thanks for your time and support! Do you have any prediction about the release date of the new version?

This change was released last week :slight_smile: