Adding the 4 channel relay board ks0212 to the MQTT universe

We just hacked a Trotec dehumidifier for HerwigsObservatory. The idea was to additionally activate the dehumidifier when the difference between outside and inside humidity is above 10%. Normally there is a fan taking care of it, but sometimes the difference gets too high. As there is already a raspberry pi running in the observatory for the weather station and the flightradar24 installation, we just added the 4 channel relay board ks0212 from keyestudio. Not touching the 220V part, we directly used the relay to “press” the TTL switch on the board for 0.5 seconds to turn the dehumidifier on and off. Here are the code snippets we used for this. The control is completely handled via MQTT.

Installing necessary programs and libraries

sudo apt install python python-pip python-dev
sudo pip install wiringpi paho-mqtt

For the sake of simplicity we used python and the GPIO library wiringpi. Therefore we first install the python development parts and then the python libraries for wiringpi and MQTT. As this is a dedicated hardware installation we don’t use virtualenv and directly install the libraries as root, system wide.

The python program

Python

import time
import wiringpi
import paho.mqtt.client as mqtt

def setup():
    wiringpi.wiringPiSetup()
    # configure the WiringPi pins of the four relays (j3, j2, j4, j5) as outputs
    wiringpi.pinMode(3, 1)
    wiringpi.pinMode(7, 1)
    wiringpi.pinMode(22, 1)
    wiringpi.pinMode(25, 1)

def short(pin):
    # simulate a button press: relay on for half a second, then off again
    switch_on(pin)
    time.sleep(0.5)
    switch_off(pin)

def switch_on(pin):
    wiringpi.digitalWrite(pin, 1)

def switch_off(pin):
    wiringpi.digitalWrite(pin, 0)

def on_connect(client, userdata, flags, rc):
    client.subscribe("sternwarte/relay/#")

def on_message(client, userdata, msg):
    # the last part of the topic selects the relay, named as printed on the pcb
    m = msg.topic.split("/")
    pin = 0
    if m[-1] == "j3":
        pin = 3
    if m[-1] == "j2":
        pin = 7
    if m[-1] == "j4":
        pin = 22
    if m[-1] == "j5":
        pin = 25
    if pin != 0:
        if msg.payload == "on":
            switch_on(pin)
        if msg.payload == "off":
            switch_off(pin)
        if msg.payload == "press":
            short(pin)

if __name__ == "__main__":
    setup()
    mqclient = mqtt.Client(clean_session=True)
    mqclient.on_connect = on_connect
    mqclient.on_message = on_message
    mqclient.connect("192.168.2.5", 1883, 60)
    mqclient.loop_forever()

Again, a very simple python script: it attaches to an mqtt server (you need to change the code, there is no config file) and subscribes itself to a certain topic. Then it waits for messages and uses the last part of the topic to identify the relay. The naming convention is based on the relay names printed on the ks0212 pcb. As payload you can send “on“, “off” and “press“. “press” switches the relay on for half a second in order to simulate a button press, as we need it for our dehumidifier.
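For example, assuming the mosquitto command line clients are installed, turning the dehumidifier on could look like this (broker IP as in the script above; relay j2 is just an example, use whichever relay you wired up):

mosquitto_pub -h 192.168.2.5 -t "sternwarte/relay/j2" -m "press"
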

Adding a systemd service

In order to keep the wannabe daemon up and running and also start it automatically at system start, we add this service configuration file in “/lib/systemd/system/relayboard.service“:

# cat /lib/systemd/system/relayboard.service
[Unit]
Description=ks0212 Relay Board
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python /home/pi/ks0212.py
Restart=on-abort

[Install]
WantedBy=multi-user.target

Activating the service

The following lines activate the service:

sudo chmod 644 /lib/systemd/system/relayboard.service
sudo systemctl daemon-reload
sudo systemctl enable relayboard.service
sudo systemctl start relayboard.service

Checking the status can be done with:

sudo systemctl status relayboard.service

ks0212 Pinout

If you want to do some hacking with the ks0212 relay board on your own, here is the pin mapping (as used in the script above). I used the very cool site https://pinout.xyz/pinout/wiringpi for getting the numbers:

relay j2 – WiringPi pin 7
relay j3 – WiringPi pin 3
relay j4 – WiringPi pin 22
relay j5 – WiringPi pin 25

The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test auto scaling functionality. Kubernetes has an out of the box autoscaler, and the official description recommends a test docker container with an apache and php installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the kubernetes setup and generates way more cpu load than a web application. Therefore I hacked a nice docker container based on a C load generator.

The docker container

The docker container is basically a very very simple Flask server with only one entry point “/”. The workload itself can be configured via two parameters:

percentage – how much cpu load will be generated

seconds – how long the workload will be active

The docker container itself uses nearly no CPU cycles while idle, as Flask is the only active python process and it just waits for calls before starting to burn CPU cycles.

lookbusy

I use a very nice open source tool called lookbusy from Devin Carraway which consumes memory and cpu cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I wrap it in the unix command timeout to terminate its execution after the given amount of seconds.

The only program is a short python Flask one: it takes the GET call to its root path, checks for the two parameters and starts a thread with the subprocess. The GET call returns immediately, so it also supports long-running workload simulations.
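A minimal sketch of such a Flask wrapper (function names, defaults and the lookbusy download are not from the original source, just my assumptions about how it could look):

```python
import subprocess
import threading

from flask import Flask, request

app = Flask(__name__)

def build_command(percentage, seconds):
    # wrap lookbusy in coreutils `timeout`, because lookbusy itself
    # has no option to limit how long it runs
    return ["timeout", str(seconds), "lookbusy", "-c", str(percentage)]

def run_load(percentage, seconds):
    subprocess.call(build_command(percentage, seconds))

@app.route("/")
def start():
    percentage = int(request.args.get("percentage", 50))
    seconds = int(request.args.get("seconds", 10))
    # run the load generator in a background thread so the GET call
    # returns immediately, even for long-running simulations
    threading.Thread(target=run_load, args=(percentage, seconds)).start()
    return "started\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```
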

The docker container is based on python latest (at this time 3.6.4). I put all the curl, make, install and rm calls into a single line in order to have a minimal footprint for the docker layer, as we do not need the source code any more. As Flask is the only requirement I also install it directly, without a requirements.txt file. The “-u” parameter for the python call is necessary to prevent python from buffering the output. Otherwise it can be quite disturbing when trying to read the debug log file.
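A hypothetical Dockerfile along those lines (the download URL and file layout are my assumptions, not the original source):

FROM python:latest
# fetch, build and install lookbusy in a single layer, then remove the sources
RUN curl -O http://www.devin.com/lookbusy/download/lookbusy-1.4.tar.gz \
    && tar xzf lookbusy-1.4.tar.gz \
    && cd lookbusy-1.4 && ./configure && make && make install \
    && cd .. && rm -rf lookbusy-1.4 lookbusy-1.4.tar.gz
RUN pip install flask
COPY app.py /app.py
# -u keeps python from buffering stdout, so the log stays readable
CMD ["python", "-u", "/app.py"]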

Building and pushing the docker container

docker build -t ansi/lookbusy .
docker push ansi/lookbusy

Building and pushing it to hub.docker.com is straightforward and nothing special.

The cluster is created with the bluemix CLI and the cluster plugin to control and configure kubernetes on the IBM infrastructure. The parameters are

--name to give your cluster a name (will be very important later on)

--location which datacenter to use (in this case dallas). Use “bx cs locations” to get the possible locations for the chosen region

--workers how many worker nodes are requested

--kube-version which kubernetes version should be used. Use “bx cs kube-versions” to get the available versions. “(default)” is not part of the parameter call.

--private-vlan which vlan for the private network should be used. Use “bx cs vlans <location>” to get the available public and private vlans

--public-vlan see private vlan

--machine-type which kind of underlying configuration you want to use for your worker node. Use “bx cs machine-types <location>” to get the available machine types. The first number after the “.” is the amount of cores, the one after “x” the amount of RAM in GB.
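Putting the parameters together, a cluster creation call could look like this (all values are placeholders, use the vlan IDs and machine type from the lookup commands above):

bx cs cluster-create \
  --name mycluster \
  --location dal10 \
  --workers 3 \
  --kube-version 1.8.6 \
  --private-vlan 1234567 \
  --public-vlan 7654321 \
  --machine-type u2c.2x4
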

This command takes some time (~1h) to generate the kubernetes cluster. BTW my bluemix cli docker container has all necessary tools and also a nice script called “start_cluster.sh” to query all parameters and start a new cluster. After the cluster is up and running we can get the kubernetes configuration with
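A sketch of fetching the configuration, assuming the cluster was named “mycluster”:

bx cs cluster-config mycluster
# paste the "export KUBECONFIG=..." line it prints into your shell,
# then kubectl talks to the new cluster
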

Starting a pod and replica set

kubectl run loadtest --image=ansi/lookbusy --requests=cpu=200m

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter “--requests“. Without it the autoscaler cannot measure the cpu load and never triggers.

Exposing the http port

Again, because the call is so simple, we directly call kubectl without a yaml file to expose port 80. We can check for the public IP with
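The expose call could look like this (a sketch, the exact flags are my assumption):

kubectl expose deployment loadtest --type=LoadBalancer --port=80
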

kubectl get svc
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
loadtest   LoadBalancer   172.21.3.160   <pending>     80:31277/TCP   23m

In case the cloud runs out of public IP addresses and the “EXTERNAL-IP” is still pending after several minutes, we can use one of the workers’ public IP addresses and the dynamically assigned port. The port is visible with “kubectl get svc” in the “PORT(S)” section. The syntax is serviceport:nodeport, so here the service port 80 is reachable on node port 31277. The worker’s public IP can be checked with
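A sketch, again assuming the cluster name “mycluster”:

bx cs workers mycluster
# lists the worker nodes with their public and private IP addresses
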

So instead of calling our service with an official public IP address on port 80 we can use

1

http://169.47.252.96:31277

Autoscaler

Kubernetes has a built-in horizontal autoscaler which can be started with

kubectl autoscale deployment loadtest --cpu-percent=50 --min=1 --max=10

In this case it measures the cpu load and starts new pods when the load is over 50%. The autoscaler in this configuration never starts more than 10 and never fewer than 1 pod. The current measurements and parameters can be checked with

kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
loadtest   Deployment/loadtest   0%/50%    1         10        1          23m

So right now the cpu load is 0 and only one replica is running.

Loadtest

Time to call our container and start the load test. Depending on the URL we can use curl to start the test with

curl "http://169.47.252.96:31277/?seconds=1000&percentage=80"

and check the result after some time with

kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
loadtest   Deployment/loadtest   60%/50%   1         10        6          23m

As we see, the load increases and the autoscaler kicks in. More details can be obtained with the “kubectl proxy” command.

Deleting the kubernetes cluster

To clean up we could delete all pods, replica sets and services individually, but we can also delete the complete cluster with
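A sketch of the deletion call, assuming the cluster name used at creation:

bx cs cluster-rm mycluster
# asks for confirmation and then removes the whole cluster including the workers
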