Kubernetes deployment


#1

Hi!
I’m deploying LoRa Server on GCP, and so far all the integrations with Google services are working fine.
The last thing I want to deploy is both loraserver and lora-app-server on a cluster.

So far I’ve been able to deploy loraserver using a configMap to provide the TOML file, a managed SQL instance and a managed Redis instance. It works perfectly.

I’m a bit of a newbie with deployments, and the interaction between loraserver and lora-app-server is a bit challenging for me.

Can someone describe exactly how these two parts communicate? Which ports do I need to open?
Also, does anyone know how to accomplish this kind of communication using Pods and Services? There’s a DNS service inside the cluster, but I can’t find examples of how to point one pod to another without knowing its (ephemeral) IP address. I’m almost there :smiley:

Thanks in advance!


#2

This is what I have done to set up loraserver in K8s. Hope it helps.

lora-app-server -> loraserver
You can expose port 8080 outside the cluster for UI access to lora-app-server.
Once you have UI access, register a network server using <Service name of your loraserver>:<portNo>.
In my case it was “loraserver:8000” (loraserver being the name of a ClusterIP Service).
The app server also has a join server exposed on port 8003 and an API server on port 8001. You have to expose these ports as well, though they don’t need to be reachable outside the cluster if your loraserver also runs in the same cluster.
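For the Service side, here is a minimal sketch of the two ClusterIP Services (the names and selector labels are from my setup; adjust to yours). Inside the cluster, the Service name resolves via the cluster DNS (`loraserver`, or fully qualified as `loraserver.<namespace>.svc.cluster.local`), which is how you point one pod at another without knowing its ephemeral IP:

```yaml
# ClusterIP Service for loraserver -- the Service name "loraserver"
# becomes its in-cluster DNS name
apiVersion: v1
kind: Service
metadata:
  name: loraserver
spec:
  type: ClusterIP
  selector:
    app: loraserver          # must match your loraserver pod labels
  ports:
    - name: api
      port: 8000             # network-server API, called by lora-app-server
      targetPort: 8000
---
# Service for lora-app-server: 8080 (UI), 8001 (API), 8003 (join server)
apiVersion: v1
kind: Service
metadata:
  name: lora-app-server
spec:
  type: ClusterIP
  selector:
    app: lora-app-server     # must match your app-server pod labels
  ports:
    - name: ui
      port: 8080
    - name: api
      port: 8001
    - name: join
      port: 8003
```

For the UI you would additionally expose 8080 via a LoadBalancer Service or an Ingress.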

loraserver -> lora-app-server
There is a TOML config option for the join server where you can provide <service name of app server>:<portNo>.
The routing profile needs to be updated to point to your app-server service name as well.
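As a rough sketch, the join-server section of loraserver’s TOML looks like this (the service name `lora-app-server` is an assumption; use whatever you named your Service, and check the key names against the config your loraserver version generates):

```toml
# loraserver.toml -- point the default join server at the app-server Service.
# Kubernetes DNS resolves the Service name to the current pod IPs,
# so no hard-coded pod IP is needed.
[join_server]
  [join_server.default]
  server="http://lora-app-server:8003"
```

The routing profile is the same idea: give it the app-server Service name and API port (in my naming, `lora-app-server:8001`) instead of an IP.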


#3

Thank you @KarthikSubramanian2!
I’m still reading about services and DNS.

I’m gonna try right now! :smiley:

Thanks!

EDIT:

It’s done! Both app and network server up and running. Thanks @KarthikSubramanian2


#4

It’s me again :stuck_out_tongue:

What about scaling? Is it possible to increase the number of replicas in this scenario? How will it behave with thousands of messages per hour?

I’m heading for a production environment; maybe I could rewrite the Cloud Function as a container :thinking:


#5

I have 3 pods of loraserver and 3 pods of app server running.
So far I have not seen issues with it.
We have subscribers on the MQTT broker that perform actions on the messages. I guess that would be your cloud function.
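Replica count is just a field on the Deployment, so scaling is a one-line change (a sketch; the name and labels are from my setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loraserver
spec:
  replicas: 3                # scale horizontally by raising this
  selector:
    matchLabels:
      app: loraserver
  template:
    metadata:
      labels:
        app: loraserver
    spec: {}                 # ...your existing pod spec goes here
```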
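If you go the subscriber route, the handler mostly amounts to decoding the uplink JSON that lora-app-server publishes on `application/<id>/device/<devEUI>/rx`. A minimal sketch in Python (the field names follow the app server’s MQTT integration; the sample values are made up, and the MQTT client itself is omitted):

```python
import base64
import json

def handle_rx(payload: bytes) -> bytes:
    """Decode an uplink message from the rx topic and return the
    raw frame bytes (the "data" field is base64-encoded)."""
    msg = json.loads(payload)
    return base64.b64decode(msg["data"])

# Example message, abridged to the fields used here:
sample = json.dumps({
    "applicationID": "1",
    "devEUI": "0102030405060708",
    "fPort": 10,
    "data": base64.b64encode(b"\x01\x17").decode(),
}).encode()

print(handle_rx(sample))  # b'\x01\x17'
```

A real subscriber would wrap this in an MQTT client’s on-message callback; the decoding logic is the same either way.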