"CODEC","error":"execution timeout"


@iegomez and @brocaar, I do appreciate your support.

Nevertheless, I share the question from @SDA, because I’m not sure I’ll be able to build it for testing.
Do you plan to release a version including it?
Or is there an easy way to build and deploy it?
Or could I even introduce it into my v2.4.1?

Hope to hear from you soon… :wink:


I have created a snapshot for testing, please see: https://www.dropbox.com/sh/5aep3dj6llapaag/AAD2oUv3E-yx7ivO5YZYrFhfa?dl=0.

Looking forward to your feedback! Please note that I have changed the 10ms default to 100ms. You can generate a new config file using `lora-app-server -c path/to/lora-app-server.toml configfile > /path/to/new-lora-app-server.toml`.


Thanks a lot.
Snapshot deployed.
The new parameter is visible in the new config file.
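For reference, the setting should look something like this in the generated file (I’m quoting the section and key names from memory, so verify them against your own generated config):

```toml
# Maximum execution time for the JS codec (section and key names assumed;
# check your generated lora-app-server.toml for the exact spelling).
[application_server.codec.js]
max_execution_time="100ms"
```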

I’m going to launch tests during the coming night to get some relevant statistics. I’ll let you know.



Before change:

  • 1000 messages sent, one every 5 sec
  • ==> 20% end in timeout

After change (with the timeout set to 100ms):

  • 5000 messages sent, one every 5 sec
  • ==> 0.00% end in timeout

I am now confident with codec usage in my use case.
Nevertheless, this test does not simulate several messages received within a short time.
So I will continue testing it in a real case with many devices pushing data.

We’ll keep in touch, but again, thanks a lot.



New test scenario:

More than 86,000 messages in 24h, with pseudo-random timing over 3 channels and 3 data rates.
RESULT: 100% decoding success.

So I confirm that this new parameter solves my issue (and the issue comes back when I set the parameter to the previous value of 10ms).

@brocaar, do you plan to release a version including it?

Thanks in advance…


Yes, I’m planning to release it this week :slight_smile: Thanks for your testing feedback!


Is there a way to see how long a custom codec is taking to execute?

Right now, the only way to determine how long the timeout should be appears to be trial and error: pick a value and see how many “execution timeout” errors you get.


You can always make a program that uses the same interpreter to run your script and check how long it takes, to get a rough estimate. Something like this:

package main

import (
	"fmt"
	"time"

	"github.com/robertkrimen/otto"
)

func main() {
	script := `
		// Write your script here. Make it return an object so everything is tested.
		function Decode(fPort, bytes) {
			var sum = 0;
			// Let's do some things.
			for (var i = 0; i < 1000; i++) {
				sum += i;
			}
			return {
				"Flags": bytes[0],
				"Battery": bytes[1],
				"Light": bytes[2],
				"Sum": sum
			};
		}
	`
	script += "\n\nDecode(fPort, bytes);\n"

	vm := otto.New()
	vm.Set("bytes", []byte{1, 2, 3})
	vm.Set("fPort", 2)

	start := time.Now()

	val, err := vm.Run(script)
	if err != nil {
		fmt.Printf("js vm error: %s\n", err)
		return
	}

	if !val.IsObject() {
		fmt.Println("function must return an object")
		return
	}

	data, err := val.Export()
	if err != nil {
		fmt.Printf("export error: %s\n", err)
		return
	}

	elapsed := time.Since(start)
	fmt.Printf("decoded: %+v\n", data)
	fmt.Printf("JS script took %s\n", elapsed)
}



I would also welcome a pull-request for this to show this as a debug(?) log message :slight_smile:


    "application_id": app.id,
    "duration":       time.Since(start),
}).Debug("codec: js codec completed Decode execution")


No problem, but just a quick thought: if I’m looking at it correctly, that would involve either passing the app to the decode function (and thus to the other codecs too, which seems ugly), or having CustomJS hold a reference to the app or its id; alternatively, checking app.PayloadCodec == CustomJSType at the calling site, with the added downside of timing the whole DecodeBytes call, not just the JS script running time.


You’re right. Maybe this log could be added to `func (a *ApplicationServerAPI) HandleUplinkData` as

    "application_id": app.id,
    "codec":          app.PayloadCodec,
    "duration":       time.Since(start),
}).Debug("payload codec completed Decode execution")

Then it would log not only the JS codec, but all codecs :slight_smile:
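To make the call-site timing idea concrete, here is a minimal, self-contained sketch using only the standard library. `decodeBytes` is a hypothetical stub standing in for the real DecodeBytes call, and the logged fields mirror the ones suggested above; it is an illustration of the approach, not the actual lora-app-server code:

```go
package main

import (
	"log"
	"time"
)

// decodeBytes is a hypothetical stand-in for the codec's DecodeBytes call;
// it just simulates some decoding work and returns a decoded object.
func decodeBytes(fPort uint8, b []byte) map[string]interface{} {
	time.Sleep(5 * time.Millisecond) // pretend the JS codec takes a while
	return map[string]interface{}{
		"Flags":   b[0],
		"Battery": b[1],
		"Light":   b[2],
	}
}

func main() {
	// Time the whole decode call at the calling site, then log the duration
	// together with the application/codec fields.
	start := time.Now()
	obj := decodeBytes(2, []byte{1, 2, 3})
	log.Printf("payload codec completed Decode execution: application_id=%d codec=%s duration=%s obj=%v",
		1, "CUSTOM_JS", time.Since(start), obj)
}
```

As noted in the thread, timing here covers the whole DecodeBytes call rather than only the JS execution, but that is usually close enough to pick a sensible timeout value.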