I have a LoPy4 + Pysense board and am displaying various fields sent from the device. The time series and table widgets display and update as expected while I monitor the page as the data are being sent. However, if I click on a different Thing Type and then click back on the Pysense Thing Type that I created (or log out and log back in), only a single data point (the most recent one) is displayed in each widget. The widgets indicate that the “Last 100” should be on display, but they are not. I’m trying to figure out if this is a problem on my end or with the Telenor page. Any ideas?

This problem occurs on multiple browsers and multiple operating systems.

Your time series widgets are configured to display real-time data. This means that the web browser is subscribed to the MQTT topic of your Thing and dynamically adds new data to the graph as it arrives through the MQTT broker. You can think of it as a temporary buffer in the browser.
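Conceptually, the browser-side behaviour can be sketched like this (a minimal sketch; the names and the fixed 100-point cap are illustrative, not MIC's actual implementation):

```typescript
// Illustrative sketch (not MIC's real code): each MQTT message is
// appended to an in-memory buffer capped at the last 100 points,
// which is what the "Last 100" widget draws from. Navigating away
// throws this buffer away.
const MAX_POINTS = 100;
const buffer: number[] = [];

function onMqttMessage(value: number): void {
  buffer.push(value);
  if (buffer.length > MAX_POINTS) {
    buffer.shift(); // drop the oldest point
  }
}

// Simulate 150 incoming measurements
for (let i = 0; i < 150; i++) {
  onMqttMessage(i);
}
console.log(buffer.length); // 100
console.log(buffer[0]);     // 50 (oldest point still held)
```

Once the page is reloaded or you navigate away, nothing like `buffer` survives, so the dashboard has to rebuild the view from storage instead.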

Once you navigate away (e.g. to another Thing Type) and then go back again, the temporary buffer is thrown away and the dashboard has to make an Elasticsearch query to fetch the previously stored data.

Now to the problem: based on the response from the Elasticsearch query (which apparently is only shown in some cases), the dashboard tries to perform some arithmetic, such as taking the average over multiple measurements. This fails because the type of the mapped field is set to “keyword” where it needs to be a number (float/long) (how do you take the average of keywords?). This is a known issue in MIC: the first value sent for a resource decides the type of the mapped field in Elasticsearch.
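A tiny sketch of why the average breaks when the values are strings (the values are illustrative; Elasticsearch rejects the aggregation on a keyword field outright rather than computing a garbage result):

```typescript
// Illustrative values: what a "keyword" (string) field holds versus
// what a numeric (float) field holds for the same measurements.
const asStrings = ["23.5", "24.25", "22.75"]; // first payload sent strings
const asNumbers = [23.5, 24.25, 22.75];       // first payload sent numbers

const avg = (xs: number[]): number =>
  xs.reduce((a, b) => a + b, 0) / xs.length;

console.log(avg(asNumbers)); // 23.5: averaging numbers is well defined

// avg(asStrings) does not even type-check. At runtime, "+" on strings
// concatenates ("23.524.2522.75"), and dividing that by 3 yields NaN.
// Elasticsearch refuses to aggregate a keyword field for the same reason.
```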

Try this: disable real-time updates on the widget and turn aggregation off. With aggregation disabled, Elasticsearch won’t try to compute averages. This may solve the problem, but you won’t see real-time updates.

Excellent description, Pontus. To avoid the problem in the future, make sure that your uplink transform returns numerical values for numeric fields (i.e., integer or float) and not strings, as your current uplink transform does. Unfortunately, it will not help to change the uplink transform for the current Thing Type now, since, as Pontus explains, it is the first payload that is transformed that determines the type representation in MIC. You could of course create a new Thing Type with a “corrected” uplink transform and go on from there (which we highly recommend). For example, the analysis part of MIC will allow you to choose resources that have numerical values. You can view the types MIC has assigned to your resources under Settings->Thing Types->your thing type->Resources.
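As a hedged sketch of what "return numerical values, not strings" means in practice (the field names and transform signature here are made up for illustration; MIC's actual uplink-transform API may differ), the key point is to parse numeric fields before returning them:

```typescript
// Hypothetical payload fields and transform shape, for illustration
// only (MIC's real uplink-transform API may differ). The point: return
// numbers, so the first stored document maps the fields as float/long.
interface DecodedPayload {
  temperature: number; // float, not "23.5"
  humidity: number;    // float
  counter: number;     // integer, not "42"
}

function uplinkTransform(raw: Record<string, string>): DecodedPayload {
  return {
    // parseFloat/parseInt ensure the stored JSON carries real numbers,
    // so Elasticsearch's dynamic mapping picks a numeric field type
    temperature: parseFloat(raw.temperature),
    humidity: parseFloat(raw.humidity),
    counter: parseInt(raw.counter, 10),
  };
}

const decoded = uplinkTransform({ temperature: "23.5", humidity: "41.2", counter: "7" });
console.log(typeof decoded.temperature); // "number"
```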

Okay, I did quite a lot of trial and error when I was first writing the uplink transform for these Things. I’ll try creating a new Thing Type with the uplink transform correct from the start and see how that goes.

I turned off real-time updates and disabled aggregations on all my time series/table widgets. The widgets now re-populate without errors, sort of…

I have one Thing that (now) only has 2 widgets. The table and time series widgets now repopulate fine every time I click on the Thing.

I have another Thing with 11 widgets, one for each variable passed in the payload. When I click on this Thing, some of the widgets re-populate with data, but some remain blank. If I click away and back again I get the same behaviour, except that different widgets do/don’t re-populate. They’re not consistent, and the Elasticsearch errors pop up again occasionally (but also not consistently).

At least this feels like progress, and I’m getting a better idea of how things happen under the hood…

Okay, I’ve now created new Thing Types and ensured, from the start, that the uplink transform returns numeric variables (float/long) appropriately. I confirmed this under Settings->Thing Types->your thing type->Resources.

Many of the time series/table widgets (real-time and aggregation disabled on all) now populate appropriately, but the behaviour is still inconsistent. Some tables/time series populate, some don’t. Both the Elasticsearch errors and the search_phase_execution_exception messages still appear occasionally as well.

I have no good answer for how to avoid the sporadic “search_phase_execution_exception”. It seems to have to do with the way the Elasticsearch cluster behaves and how it is configured. It may be as simple as temporary overload (multiple queries eating memory). There are relatively many users under the Start IoT domain, and if I’m not mistaken we all share the same Elasticsearch cluster instance.