ADR algorithm and configuration


We are using the latest release and have started to focus on in-the-field testing of devices. We seem to have run into an issue with the ADR design, and I feel like I’m missing a configuration option.

Basically, we are using US902 and our nodes are capped at 20 dBm. The only ADR configurations I’m aware of are installation_margin (defaulting to 10 and unchanged in our system) and the min/max DR values.

The problem is that the loop seems to adjust TX power levels first and foremost and then, I guess, the SF. I’ve watched the server request our devices to go to 22, 24, 26, and 28 dBm even though they don’t have this capability. As I’d expect when this happens, you can watch the SNR continue to go down because the nodes can’t adjust, until eventually you start dropping packets. For some reason, I’ve never seen our system try to adjust the SF to anything other than 7.

How does the ADR loop work? How do I tell the system that our nodes can’t go above 20 dBm? What is the installation margin, and when would you expect the server to signal an SF change instead of just a TX power change?



Hi Patrick, to describe the ADR engine in short:

When receiving an uplink, it calculates the link budget based on the uplink SNR and the min. SNR required for the used spreading-factor. It then subtracts the installation-margin from that link budget; the result divided by 3 defines the number of ADR steps.
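
The calculation above can be sketched as follows. The required-SNR table and function names are illustrative assumptions, not LoRa Server’s actual code:

```go
package main

import "fmt"

// Approximate demodulation-floor SNR per spreading factor, as used by
// common ADR implementations. Values here are assumptions for
// illustration, not LoRa Server's exact table.
var requiredSNR = map[int]float64{
	12: -20.0, 11: -17.5, 10: -15.0, 9: -12.5, 8: -10.0, 7: -7.5,
}

// adrSteps computes the number of ADR steps as described above:
// (uplink SNR - required SNR for the SF - installation margin) / 3.
func adrSteps(snr float64, sf int, installationMargin float64) int {
	return int((snr - requiredSNR[sf] - installationMargin) / 3)
}

func main() {
	// Example: an uplink received at SNR 5 dB on SF10, with the
	// default installation margin of 10 dB.
	fmt.Println(adrSteps(5, 10, 10)) // (5 - (-15) - 10) / 3 -> 3 steps
}
```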

If the number of steps is positive, the server can increase the data-rate (decrease the spreading-factor) by that number of steps until it reaches the max. DR.

When there are steps left after that, it will use these to decrease the TXPower, following the TXPower table defined for the region (e.g. EU868).


Currently, when the number of steps is negative, it will increase the TXPower if possible. It will never decrease the data-rate!
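
A rough sketch of how the steps might be applied, under assumed region limits (the constants and field names here are hypothetical, for illustration only):

```go
package main

import "fmt"

// A hypothetical device state; field names are illustrative,
// not LoRa Server's actual types.
type state struct {
	dr      int // current data-rate index
	txPower int // current TXPower in dBm
}

const (
	maxDR      = 5  // e.g. EU868 DR5 (SF7); illustrative
	minTXPower = 2  // lowest power the server will request (illustrative)
	maxTXPower = 20 // device's max supported power (illustrative)
	powerStep  = 2  // dBm per ADR step (illustrative)
)

// applySteps sketches the order described above: positive steps first
// raise the data-rate, then lower TXPower; negative steps only raise
// TXPower, never lower the data-rate.
func applySteps(s state, steps int) state {
	for steps > 0 && s.dr < maxDR {
		s.dr++
		steps--
	}
	for steps > 0 && s.txPower-powerStep >= minTXPower {
		s.txPower -= powerStep
		steps--
	}
	for steps < 0 && s.txPower+powerStep <= maxTXPower {
		s.txPower += powerStep
		steps++
	}
	return s
}

func main() {
	// DR 2 -> 5 consumes 3 steps, the remaining 2 lower TXPower.
	fmt.Println(applySteps(state{dr: 2, txPower: 14}, 5)) // {5 10}
}
```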

Please note that a device should answer the LinkADRReq with the Power ACK bit set to 0 when the requested TXPower is not implemented (LoRaWAN 1.0.2 specification).


Interesting. Just out of curiosity, why will it never decrease the data-rate? And with that in mind, what determines the minimum data rate and how/when is that set?


The min. DR is currently not used by the implemented algorithm, but it is specified as a device-profile field by the LoRaWAN Backend Interfaces specification.

LoRa Server never decreases the DR, as this could result in a domino effect. E.g. when your network is dense, lowering the data-rate of one device will impact other devices, so those devices might also need to change to a lower data-rate (following the ADR algorithm), and so on…

Note that devices implement an automatic data-rate decay, so in case of disconnection from the network they will lower their data-rate until they are connected again.
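
This device-side decay follows the ADRACKReq back-off from the LoRaWAN specification. A rough model, assuming the default ADR_ACK_LIMIT/ADR_ACK_DELAY values of 64 and 32 (the function and its exact drop points are a simplification, not spec text):

```go
package main

import "fmt"

// Defaults from the LoRaWAN regional parameters (assumed here).
const (
	adrAckLimit = 64
	adrAckDelay = 32
	minDR       = 0
)

// decaySteps models the device-side back-off: after adrAckLimit
// uplinks without any downlink the device sets ADRACKReq, and after
// each further adrAckDelay unanswered uplinks it drops one data-rate
// step, down to the lowest possible DR.
func decaySteps(uplinksWithoutDownlink, currentDR int) int {
	if uplinksWithoutDownlink <= adrAckLimit+adrAckDelay {
		return 0
	}
	steps := 1 + (uplinksWithoutDownlink-adrAckLimit-adrAckDelay)/adrAckDelay
	if steps > currentDR-minDR {
		steps = currentDR - minDR // can't go below the lowest DR
	}
	return steps
}

func main() {
	fmt.Println(decaySteps(96, 3))  // just before the first drop -> 0
	fmt.Println(decaySteps(97, 3))  // first drop -> 1
	fmt.Println(decaySteps(200, 3)) // capped at the lowest DR -> 3
}
```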


Yes, that’s what I suspected and that makes sense. Does the device decay algorithm ever go below the DR set in the device-profile? In other words, is the device-profile DR the “floor” or can a device decay beyond that rate if it disconnects?


Yes, the device could go below that value. I think it goes to the lowest data-rate possible and, failing that, goes back into the join-state. This is documented in more detail in the LoRaWAN specification.


I’m looking to see if changing the installation_margin value could help some devices that have poor RSSI and SNR values. I think if I increase the installation_margin, then the DR increase will be less aggressive. Can you confirm that I am thinking of this correctly?

Based on your previous post, the formula is:

(link_budget - installation_margin) / 3 = number of data-rate steps that LoRa Server moves.

Am I understanding that correctly?


Yes, that is correct.
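
For a hypothetical uplink with a 20 dB link budget (uplink SNR minus the min. SNR for the current SF), the effect of raising the installation margin can be checked numerically:

```go
package main

import "fmt"

// steps applies the formula above: (link_budget - installation_margin) / 3.
func steps(linkBudget, installationMargin float64) int {
	return int((linkBudget - installationMargin) / 3)
}

func main() {
	fmt.Println(steps(20, 10)) // default margin: (20 - 10) / 3 -> 3 steps
	fmt.Println(steps(20, 15)) // raised margin:  (20 - 15) / 3 -> 1 step
}
```

So a larger installation_margin does make the DR/TXPower adjustments less aggressive, leaving more SNR headroom for devices with poor link quality.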