Kubernetes deployment

Hi!
I’m deploying LoRa Server on GCP, and so far all the integrations with Google services are working fine.
The last thing I want to deploy is both loraserver and lora-app-server on a cluster.

So far I’ve been able to deploy loraserver using a ConfigMap to provide the TOML file, plus a managed SQL instance and a managed Redis instance. It works perfectly.
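For anyone curious, the ConfigMap approach looks roughly like this (names and values are placeholders, not my real config; I believe loraserver reads its config from /etc/loraserver by default):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loraserver-config          # hypothetical name
data:
  loraserver.toml: |
    [postgresql]
    dsn="postgres://user:password@<managed-sql-ip>/loraserver?sslmode=disable"

    [redis]
    url="redis://<managed-redis-ip>:6379"

# In the Deployment's pod spec, mount it where loraserver expects its config:
#   volumes:
#   - name: config
#     configMap:
#       name: loraserver-config
#   containers' volumeMounts:
#   - name: config
#     mountPath: /etc/loraserver
```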

I’m a bit of a newbie with deployments, and the interaction between loraserver and lora-app-server is a bit challenging for me.

Can someone describe exactly how these two parts communicate? Which ports do I need to open?
Also, does anyone know how to accomplish this kind of communication using Pods and Services? There’s a DNS service inside the cluster, but I can’t find examples of how to point one pod to another without knowing its (ephemeral) IP address. I’m almost there :smiley:

Thanks in advance!

This is what I have done to set up loraserver in K8s. Hope it helps.

lora-app-server -> loraserver
You can expose port 8080 outside the cluster for UI access to lora-app-server.
Once you have UI access, register a network server using <service name of your loraserver>:<port no>.
In my case it was “loraserver:8000” (loraserver being the name of a ClusterIP Service).
The app server also has a join server exposed on port 8003 and an API server on 8001. You have to expose these ports as well, though they aren’t needed outside the cluster if your loraserver also runs in the same region.
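As a sketch, the Services look roughly like this (names, labels, and the LoadBalancer choice are from my setup and may differ in yours). The Service name is what cluster DNS resolves, so pods never need each other’s IPs:

```yaml
# ClusterIP Service for loraserver: resolvable in-cluster as "loraserver"
# (or loraserver.<namespace>.svc.cluster.local)
apiVersion: v1
kind: Service
metadata:
  name: loraserver
spec:
  type: ClusterIP
  selector:
    app: loraserver            # must match your loraserver pod labels
  ports:
  - name: api
    port: 8000
    targetPort: 8000
---
# ClusterIP Service for lora-app-server's internal API and join-server ports
apiVersion: v1
kind: Service
metadata:
  name: lora-app-server
spec:
  type: ClusterIP
  selector:
    app: lora-app-server
  ports:
  - name: api
    port: 8001
    targetPort: 8001
  - name: join
    port: 8003
    targetPort: 8003
---
# Separate Service for the web UI, exposed outside the cluster
# (a LoadBalancer here; an Ingress works too)
apiVersion: v1
kind: Service
metadata:
  name: lora-app-server-web
spec:
  type: LoadBalancer
  selector:
    app: lora-app-server
  ports:
  - name: web
    port: 8080
    targetPort: 8080
```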

loraserver -> lora-app-server
There is a TOML config option for the join server where you can provide <service name of app server>:<port no>.
The routing profile needs to be updated to point to your app server service name as well.
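If I remember the keys correctly, the relevant TOML fragments look roughly like this (the Service name "lora-app-server" is just an example; check your own config files for the exact section names):

```toml
# loraserver.toml: point the default join server at the app server's
# join-server port, using the Service name instead of an IP
[join_server]
  [join_server.default]
  server="http://lora-app-server:8003"
```

```toml
# lora-app-server.toml: the host:port the network server uses to reach
# the app server's API (this is what ends up in the routing profile)
[application_server]
  [application_server.api]
  public_host="lora-app-server:8001"
```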

Thank you @KarthikSubramanian2!
I’m still reading about services and DNS.

I’m gonna try right now! :smiley:

Thanks!

EDIT:

It’s done! Both app and network server up and running. Thanks @KarthikSubramanian2

It’s me again :stuck_out_tongue:

What about scaling? Is it possible to increase the number of replicas in this scenario? How will it behave with thousands of messages per hour?

I’m heading for production environment, maybe I could write the Cloud Function in a container :thinking:

I have 3 pods of loraserver and 3 pods of app server running.
So far I have not seen issues with it.
We have subscribers to the MQTT broker to perform actions on it. I guess that would be your cloud function.
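For scaling, the replica count lives in the Deployment; since loraserver keeps its state in Redis and PostgreSQL rather than in the pod, the replicas can share it. A minimal fragment (your labels, image tag, and ConfigMap name will differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loraserver
spec:
  replicas: 3        # bump here, or: kubectl scale deployment loraserver --replicas=3
  selector:
    matchLabels:
      app: loraserver
  template:
    metadata:
      labels:
        app: loraserver
    spec:
      containers:
      - name: loraserver
        image: loraserver/loraserver:2   # example tag, pin your own version
        volumeMounts:
        - name: config
          mountPath: /etc/loraserver
      volumes:
      - name: config
        configMap:
          name: loraserver-config        # hypothetical ConfigMap name
```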

Hi guys! Have any of you seen problems with ingress and HTTPS certificates?

We have a production deployment using Voyager Ingress as a TCP ingress with TLS termination for our EMQX broker. For the AppServer web UI, we use the NGINX Ingress Controller, also with TLS termination.

Everything works as expected; just a few times we hit this error: Empty devices list in web UI - Kubernetes deploy, but only when we use the Live Frames view for a long time.

Thanks for the answer! Is there a reason for two different ingresses in your cluster? Did you try Voyager with the app server?

Our main ingress is NGINX and it has a lot of rules… but I was never able to set up TCP on NGINX, so we deployed Voyager only for the TCP services.

It works fine, but I’m having an issue with RAK OpenWRT-based gateways. The TLS handshake doesn’t work and I have not been able to fix it yet: https://forum.rakwireless.com/t/mqtt-connection-err-14/1345
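For future readers: the NGINX Ingress Controller can proxy raw TCP via its tcp-services ConfigMap (it does not terminate TLS on those ports, though, which may be exactly why Voyager was needed here). A sketch, assuming the controller runs in the ingress-nginx namespace and EMQX listens on 1883 in default:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service>:<port>"
  "1883": "default/emqx:1883"
```

The controller also needs to be started with `--tcp-services-configmap=ingress-nginx/tcp-services` for this ConfigMap to be picked up.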