The main concepts that we need to be aware of when using InfluxDB are that a record of data has a timestamp, a set of tags and a measured value. This allows us, for example, to create a value named Temperature and tag it depending on the source sensor:

This allows processing all the data, or only the data matching a certain tag or tags. Values and tags can be created on the fly without defining them beforehand, which is a bit different from standard RDBMS engines.
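In InfluxDB's line protocol that could look like the following (the sensor names here are just illustrative):

Temperature,Sensor=kitchen value=22.1
Temperature,Sensor=living_room value=21.4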

Creating an InfluxDB database:
To create the database, we need to access the machine hosting the InfluxDB server and execute the command influx:
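A minimal session might look like this (assuming a default local install, and using the database name we'll use below):

influx
> create database SensorData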

Now we have our database created, which I've named SensorData. To try an example with the above temperature data we can do the following:

> insert Temperature,Sensor=kitchen value=22.1
ERR: {"error":"database is required"}
Note: error may be due to not setting a database or retention policy.
Please set a database with the command "use <database>" or
INSERT INTO <database>.<retention policy> <point>
> use SensorData
Using database SensorData
>

As we can see, we first need to select the database where we are going to insert data, with the command use SensorData:

Also note that no DDL (data definition language) was used to create the tags or the measured value; we've just inserted data for our measurement, with our tags and value(s), without having to define the schema first.
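And we can read the data back, either everything or filtered by tag; a quick example using the insert from above:

> select * from Temperature
> select * from Temperature where Sensor = 'kitchen'

The second query returns only the points tagged with Sensor=kitchen.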

Configuring Node-Red
Since we now have a database, we can configure the InfluxDB Node-Red nodes to store data in the database:

There are two types of InfluxDB nodes: one that has both an input and an output, and another that only has an input. The former is for making queries to the database, where we provide the query on the node input and the results are returned on the output. The latter is only for storing data in the database.
For both nodes we need to configure an InfluxDB server:

We need to press the pen icon right next to the server field to add a new InfluxDB server or reconfigure an existing one:

A set of credentials is required, but since I haven't yet configured security, we can just use admin/admin as the username and password. In a real deployment we must activate security.

From now on it is rather simple. Referring to the InfluxDB node configuration screenshot (not the InfluxDB server configuration), we have a configuration field named Measurement. This is the measurement name that we associate values with. Picking up on the above example with the insert command, it would be Temperature.

Now if the msg.payload provided as input to the node is a single value, let's say 21, this is equivalent to doing:

insert Temperature value=21

There are other formats for msg.payload that allow associating tags and fields. Just check the Info tab for the node.
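For instance, based on the payload formats described in the node's Info tab (a sketch; check the tab for the exact shapes your node version supports), an array of two objects is interpreted as the fields followed by the tags:

msg.payload = [{ value: 22.1 }, { Sensor: "kitchen" }]

which ends up roughly equivalent to:

insert Temperature,Sensor=kitchen value=22.1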

Simple example:

The following flow shows a simple example of a value received through MQTT, in this case the free heap from one of my ESP8266 devices, being stored in InfluxDB:

A more or less standard software stack for controlling, processing and displaying data has emerged, used by almost everyone hacking around with Arduinos, ESP8266s, Raspberry Pis and a plethora of other devices. This "standard" software stack basically always includes the MQTT protocol, some sort of Web based services, Node-Red and several different cloud based services like Thingspeak, PubNub and so on. For displaying data locally, solutions like Freeboard and the Node-Red UI are great resources, but they only show current data/status and have no easy way to show historical data.

So in this post I'll document a software stack based on Node-Red, InfluxDB and Grafana that I use to store and display data from the sensors I have around, while keeping a historical record of the data and being able to display it. The key asset here is the specialized time-series database InfluxDB, which stores the data and allows fast retrieval based on timestamps: 5 minutes ago, the last 7 days, and so on. InfluxDB is not the only time-series database available, but it integrates directly with Grafana, the software that allows building dashboards based on the stored data.

I've been running an older version of InfluxDB on my ARM based Odroid server for quite a while, since at the time ARM based builds of InfluxDB and Grafana were not available. This is no longer the case: both InfluxDB and Grafana now have ARM based builds, so we can use them on Raspberry Pi and Odroid ARM based boards.

So let’s start:

Setting up Node-Red with InfluxDB
I'll not detail the Node-Red installation itself since it is already thoroughly documented everywhere. To install the supporting nodes for InfluxDB, we need to install the package node-red-contrib-influxdb:

cd ~/.node-red
npm install node-red-contrib-influxdb

We should now restart Node-Red so that it detects the new nodes.
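For example, if Node-Red is running as a systemd service (the service name may differ on your install):

sudo systemctl restart nodered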

Installing InfluxDB
We can go to the InfluxDB downloads page and follow the installation instructions for our platform. In my case I need the ARM build to be used on Odroid.
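As a sketch, the download and unpack steps for the 1.2.0 ARM build look something like this (the exact URL and version will vary; copy the link from the downloads page):

wget https://dl.influxdata.com/influxdb/releases/influxdb-1.2.0_linux_armhf.tar.gz
tar xvzf influxdb-1.2.0_linux_armhf.tar.gz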

The InfluxDB engine is now decompressed in the newly created directory influxdb-1.2.0-1. Inside this directory there are the directories that should be copied to the system directories /etc, /usr and /var:

sudo -s
cd /home/odroid/influxdb-1.2.0-1

Copy the files to the right locations. I've added the -i switch just to make sure that I don't overwrite anything.
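Something along these lines, run as root from inside the unpacked directory:

cp -iR etc/* /etc
cp -iR usr/* /usr
cp -iR var/* /var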

We can now set up the automatic startup script. In the directory /usr/lib/influxdb/scripts there are scripts both for systemctl based Linux versions and for init.d based versions, which is my case. So all I have to do is copy the init.sh script from that directory to /etc/init.d and link it to my run level:
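A sketch of those steps (update-rc.d creates the runlevel links on Debian/Ubuntu style systems; they can also be made by hand):

cp /usr/lib/influxdb/scripts/init.sh /etc/init.d/influxdb
update-rc.d influxdb defaults
/etc/init.d/influxdb start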

Installing Grafana
We now need to download Grafana. In my case, for the Odroid, which is an ARMv7 based processor, no official release/binary is available.
But ARM builds are available on this GitHub repository: https://github.com/fg2it/grafana-on-raspberry for both the Raspberry Pi and other ARM based computer boards, though only for Debian/Ubuntu based OS's. Just click on the download button in the description for the ARMv7 based build, and at the end of the next page a download link should be available:
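From there it is the usual .deb install; as a sketch (the link and file name depend on the release you picked):

wget <download link copied from the release page>
sudo dpkg -i grafana_<version>_armhf.deb
sudo service grafana-server start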

Just a quick hack to use the Node-Red dashboard to monitor some of the values from the UPS that is attached to my Synology NAS.

Gathering the data and feeding it to Node-Red
First I thought of writing some sort of Python or NodeJS program to run the upsc command, process the output and feed it, through MQTT, to Node-Red.
But since that seemed a bit of an overkill just to process a text output, transform it to JSON and push it through MQTT, I decided to use some shell scripting, bash to be more explicit.

So all we need now is to transform the above output from that text format to JSON and feed it to MQTT.
This means that we need to put the parameter names and values between double quotes ("), separate the entries with commas, and also replace the . in parameter names with _ so that we don't have problems working with the parameter names in Node-Red JavaScript.

Since I'm processing each line of the output, I'm using gawk/awk, which allows some text processing. The awk program is as follows:
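A minimal sketch of that program, reconstructed from the description that follows:

# procupsc.awk - convert "name.like.this: value" lines to JSON
BEGIN { FS = ": *"; print "{" }          # fields are separated by ':'
{
    gsub(/\./, "_", $1)                   # replace . with _ in parameter names
    printf "%s\"%s\": \"%s\"", lline, $1, $2
    lline = ",\n"                         # empty on the first line, "," afterwards
}
END { print "\n}" }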

At the beginning this prints the opening JSON bracket, and then, line by line, the parameter name and value between " characters and separated by :.
The lline variable is empty at the first line, so it prints nothing, but on the following lines it prints the , which separates the JSON values.
We just need awk to recognize the parameters and values, and that is easy since they are separated by :.

So if the above code is saved as the procupsc.awk file, the following command will produce the JSON and publish it to MQTT:
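As a sketch (the UPS name, the Synology host address and the MQTT topic are assumptions for illustration; mosquitto_pub -s sends the whole standard input as one message):

upsc ups@192.168.1.10 | awk -f procupsc.awk | mosquitto_pub -h localhost -t ups/status -s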

Node Red processing and visualization
On the Node-Red side it is now easy. We receive the above upsc JSON object as a string in msg.payload, and we use the JSON node to convert it into a JavaScript object whose properties hold the individual parameters.
From here we just feed the data to charts and gauges. The code is:

To protect the data residing on my Synology NAS, I've bought and installed a UPS, an APC 700U to be exact. The trigger for buying and installing one was a (partial) data loss event that happened to a family member's external hard disk due to a power loss. The data recovery cost was higher than buying a UPS, and of course the lack of backups added to the outcome of that event.

While no Synology NAS was involved in the above data loss, I know first hand that backups by themselves only add one layer of protection against possible data loss, and since from time to time I also have power loss events, it was just a matter of time before my NAS was hit by an unrecoverable power event and, who knows, data loss.

So buying a UPS just might be a good idea…

Anyway, the UPS from APC that I bought has a USB port allowing it to be connected to the Synology, which allows the UPS to be monitored and also allows the NAS to gracefully shut down before the UPS battery is exhausted. As a bonus, it also allows running a UPS monitoring server, where other devices that share the same UPS power source can be notified of a power event through the network. Just keep in mind that the network switch or router must also be power protected…

Installing the UPS:
Installing the UPS is as simple as powering down all the devices that will connect to the UPS: Synology NAS, Odroid, external hard disks, PC base unit and network switch, and connecting a USB cable from the UPS to one of the DiskStation's rear USB ports.

After starting up, just go to the DSM Control Panel, select Hardware & Power and then the UPS tab. Enable the UPS by ticking Enable UPS support, and also enable the UPS server to allow remote clients by ticking Enable UPS Network Server:

We can see by pressing the Device Information button that the UPS was correctly detected:

To end the configuration, we need to press the Permitted DiskStation Devices button and add the IP addresses of the devices that also have their power sources connected to the UPS and will also monitor the power status, in my case the IPs of the Odroid and my home PC.

And that’s it.

Setting up Arch Linux
Interestingly, I didn't find the NUT tools (Network UPS Tools) in the core Arch repositories, but they are available in the AUR repository:

yaourt -S network-ups-tools nut-monitor

The above packages will install the core NUT tools and a graphical monitor.
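To point the monitor at the Synology's NUT server, something like the following in /etc/ups/upsmon.conf should work (the host IP is an assumption; Synology's built-in NUT server is commonly reported to use monuser/secret as credentials, so verify for your DSM version):

# /etc/ups/upsmon.conf - monitor the UPS served by the Synology NAS
MONITOR ups@192.168.1.10 1 monuser secret slave

A quick test from the command line: upsc ups@192.168.1.10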

This allows us to download the correct version of the NodeJS binaries from the NodeJS site: NodeJS downloads.
In our case we choose the ARMv7 architecture binaries, which at the time of writing is the file node-v6.9.2-linux-armv7l.tar.xz.
So I've just copied the link from the NodeJS site and did a wget on the Odroid:
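Something like this, with the link pasted from the downloads page:

wget https://nodejs.org/dist/v6.9.2/node-v6.9.2-linux-armv7l.tar.xz
tar xvf node-v6.9.2-linux-armv7l.tar.xz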

As an owner of an Odroid C1+, which is a great little device, it came to my knowledge that it might now be obsolete 🙂

This is because the Odroid C2 has come out: Odroid C2. And if I think that the C1+ is great, what to say about the C2, which is around the same price point?

The specifications:

64bit S905 ARM quad-core 2GHz processor
2GB of RAM!
HDMI 2.0 with 4K support (not that it matters for me… )
Gigabit Ethernet, as the C1+
eMMC support.
Several other tidbits very similar to the C1+.
Missing RTC.

So a major leap forward from the C1+ and from the RPi.

The $40 price will be nowhere near that value in Europe; probably around the 50/55€ mark is my guess (+ P/P). Also, eMMC prices are still very high, almost as high as, or even higher than, the price of the board itself.

Still, I'm a bit wary of buying one, since HardKernel (the Odroid manufacturer) only provides a 4 week warranty!? I bought mine in Europe, so I suppose it has a 2 year warranty by law.

Just for a quick reference, the following instructions detail how to install the latest Mosquitto MQTT broker with WebSockets enabled on the Odroid C1+ running the (L)Ubuntu release. The instructions are probably also valid for other platforms, but I haven't tested them there.

1. Install the prerequisites
As the root user, install the following libraries:

apt-get update
apt-get install uuid-dev libwebsockets-dev

Probably the SSL libraries and a few others are also needed, but in my case they were already installed.

If needed or wanted, also change the logging level and destination in the configuration file.
For example:

# Note that if the broker is running as a Windows service it will default to
# "log_dest none" and neither stdout nor stderr logging is available.
# Use "log_dest none" if you wish to disable logging.
log_dest file /var/log/mosquitto.log
# If using syslog logging (not on Windows), messages will be logged to the
# "daemon" facility by default. Use the log_facility option to choose which of
# local0 to local7 to log to instead. The option value should be an integer
# value, e.g. "log_facility 5" to use local5.
#log_facility
# Types of messages to log. Use multiple log_type lines for logging
# multiple types of messages.
# Possible types are: debug, error, warning, notice, information,
# none, subscribe, unsubscribe, websockets, all.
# Note that debug type messages are for decoding the incoming/outgoing
# network packets. They are not logged in "topics".
log_type error
log_type warning
log_type notice
log_type information
# Change the websockets logging level. This is a global option, it is not
# possible to set per listener. This is an integer that is interpreted by
# libwebsockets as a bit mask for its lws_log_levels enum. See the
# libwebsockets documentation for more details. "log_type websockets" must also
# be enabled.
websockets_log_level 0
# If set to true, client connection and disconnection messages will be included
# in the log.
connection_messages true
# If set to true, add a timestamp value to each log message.
log_timestamp true

4. Automatic start
The easiest way is to add the following file to the init.d directory and link it to the current runlevel:
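A minimal sketch of such a script (assuming Mosquitto was installed from source to /usr/local; adapt the paths as needed):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          mosquitto
# Required-Start:    $remote_fs $network
# Required-Stop:     $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Mosquitto MQTT broker
### END INIT INFO

case "$1" in
  start)
    # -d detaches the broker as a daemon using the given configuration file
    /usr/local/sbin/mosquitto -d -c /etc/mosquitto/mosquitto.conf
    ;;
  stop)
    killall mosquitto
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

exit 0

Save it, for example, as /etc/init.d/mosquitto, make it executable (chmod +x) and link it to the runlevel with update-rc.d mosquitto defaults.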