Gonzalo Ayuso | Web Architect
https://gonzalo123.com
Mon, 14 Jan 2019 14:59:28 +0000
Using cache buster with OpenUI5 outside SCP
https://gonzalo123.com/2019/01/14/using-cache-buster-with-openui5-outside-scp/
Mon, 14 Jan 2019 14:59:20 +0000

When we work with SPAs and web applications we need to deal with the browser’s cache. Sometimes we change our static files but the client’s browser keeps using a cached version of the file instead of the new one. We can tell the user: “Please empty your cache to use the new version”. But most of the time the user doesn’t know what we’re talking about, and we have a problem. There’s a technique called cache busting used to bypass this issue. It consists of changing the name of the file (or adding an extra parameter), basically to ensure that the browser sends a different request to the server, preventing it from reusing the cached version of the file.

When we work with a sapui5 application on SCP, we only need to use the cachebuster version of sap-ui-core.

With this configuration, our framework will use a “cache buster friendly” version of our files and SCP will serve them properly.

For example, when our framework wants the /dist/Component.js file, the browser will request /dist/~1541685070813~/Component.js, and the server will serve the file /dist/Component.js. As I said before, when we work with SCP our standard build process automatically takes care of this. It creates a file called sap-ui-cachebuster-info.json listing all our files with a hash that the build process changes each time the file changes.
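The idea behind that URL rewriting can be sketched in a few lines of Python (a simplified illustration, not the framework’s real code; the hash is just the one from the example above):

```python
import json
import re

def resolve_cachebuster_url(url, cachebuster_info):
    """Strip the ~hash~ segment from a cache-buster URL and check that the
    underlying file is listed in sap-ui-cachebuster-info.json."""
    path = re.sub(r"~\d+~/", "", url)         # /dist/~1541685070813~/Component.js -> /dist/Component.js
    relative = path.replace("/dist/", "", 1)  # Component.js
    if relative in cachebuster_info:
        return path
    raise FileNotFoundError(path)

# sap-ui-cachebuster-info.json maps each file to a hash that changes on every build
info = json.loads('{"Component.js": "1541685070813"}')
print(resolve_cachebuster_url("/dist/~1541685070813~/Component.js", info))
# -> /dist/Component.js
```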

It works like a charm, but I don’t always use SCP. Sometimes I serve OpenUI5 from an nginx server, for example, and then the cache buster “doesn’t work”. That’s a problem because I need to deal with browser caches again each time we deploy a new version of the application. I wanted to solve the issue. Let me explain how I did it.

Since I was using a Lumen/PHP server for the backend, my first idea was to create a dynamic route in Lumen to handle the cache-buster URLs. I knew this approach would solve the problem, but there was something I didn’t like: I’d be using a dynamic server to serve static content. I don’t have huge traffic, so I could use this approach, but it isn’t elegant.

My second approach was: OK, I’ve got a sap-ui-cachebuster-info.json file where I can see all the files that the cache buster will use, along with their hashes. So why not create those files in my build script? With this approach I create the full static structure each time I deploy the application, without needing any server-side scripting language to generate dynamic content. OpenUI5 uses grunt, so I can create a simple grunt task to create the files.
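The real build step is a grunt task; as a language-agnostic sketch of what it has to do, here is the same idea in Python (file layout assumed):

```python
import json
import shutil
from pathlib import Path

def materialize_cachebuster_files(dist="dist"):
    """For every file listed in sap-ui-cachebuster-info.json, create a
    physical copy under a ~hash~ directory so a plain static server
    (nginx) can answer the cache-buster URLs without any dynamic code."""
    dist = Path(dist)
    info = json.loads((dist / "sap-ui-cachebuster-info.json").read_text())
    for relative_path, file_hash in info.items():
        source = dist / relative_path
        target = dist / f"~{file_hash}~" / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)
```

Running this at deploy time leaves nginx serving plain files, which was the whole point of avoiding the Lumen route.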

Playing with microservices, Docker, Python and Nameko
https://gonzalo123.com/2018/12/17/playing-with-microservices-docker-python-an-nameko/
Mon, 17 Dec 2018 12:30:47 +0000

In the last projects I’ve been involved with I’ve been playing, in one way or another, with microservices, queues and things like that. I keep facing the same tasks: building RPCs, workers, API gateways… Because of that I’ve been looking for a framework to help me with that kind of stuff, and finally I discovered Nameko. Basically, Nameko is the Python tool I’ve been looking for. In this post I’ll create a simple proof of concept to learn how to integrate Nameko into my projects. Let’s start.

The POC is a simple API gateway that gives me the local time in ISO format. I can create a simple Python script to do it.
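A minimal sketch of that script (in the post this function ends up exposed through Nameko as an RPC method):

```python
from datetime import datetime

def get_localtime():
    # Local time in ISO 8601 format, e.g. 2018-12-17T13:30:47.123456
    return datetime.now().isoformat()

if __name__ == "__main__":
    print(get_localtime())
```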

We can also create a simple Flask API server to consume this information. The idea is to create an RPC worker to generate this information, and also another worker that returns the local time taken from a PostgreSQL database (yes, I know it’s not very useful, but it’s just an excuse to use a PG database in the microservice).

Monitoring the bandwidth with Grafana, InfluxDB and Docker
https://gonzalo123.com/2018/11/26/monitoring-the-bandwidth-with-grafana-influxdb-and-docker/
Mon, 26 Nov 2018 11:04:59 +0000

Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy about switching to a fiber connection. I finally changed, but while my internet company was solving an incident I started to hack a quick and dirty script that monitors my connection speed (just for fun and to practise with InfluxDB and Grafana).

Today I’ve lost that quick and dirty script (please, Gonzalo, always keep a working backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It’s simple: “dd if=/dev/disk3 of=pi3.img”) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.

To monitor the bandwidth we only need to use the speedtest-cli API. We can use it from the command line and, as it’s a Python library, we can create a Python script that uses it.
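A sketch of that script: the calls follow speedtest-cli’s Python API, and the measurement and field names are my own choice for this example, not necessarily the ones in the original script:

```python
from datetime import datetime, timezone

def build_point(download_bps, upload_bps, measurement="speedtest"):
    """Build an InfluxDB point (the dict format the influxdb Python
    client accepts) from raw speedtest results in bits per second."""
    return {
        "measurement": measurement,
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {
            "download_mbps": round(download_bps / 1_000_000, 2),
            "upload_mbps": round(upload_bps / 1_000_000, 2),
        },
    }

def measure():
    # Requires the speedtest-cli package (pip install speedtest-cli)
    import speedtest
    s = speedtest.Speedtest()
    s.get_best_server()
    return build_point(s.download(), s.upload())
```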

Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana within the docker files instead of opening the browser and creating the datasource and dashboard by hand. After a couple of hours navigating GitHub repositories, I finally created exactly what I needed for this post. Basically, it’s a custom entry point for my Grafana host that creates the datasource and the dashboard (via Grafana’s API).
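The datasource part of that entry point can be sketched in Python against Grafana’s HTTP API (stdlib only; the host names and datasource name are assumptions for this example):

```python
import base64
import json
from urllib import request

def datasource_payload(influx_url, database):
    # JSON body for Grafana's POST /api/datasources endpoint
    return {
        "name": "influxdb",
        "type": "influxdb",
        "access": "proxy",
        "url": influx_url,
        "database": database,
        "isDefault": True,
    }

def create_datasource(grafana_url, user, password, payload):
    # e.g. grafana_url = "http://grafana:3000" inside the compose network
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = request.Request(
        f"{grafana_url}/api/datasources",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    return request.urlopen(req)
```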

Working with SAPUI5 locally (part 3). Adding more services in Docker
https://gonzalo123.com/2018/10/08/working-with-sapui5-locally-part-3-adding-more-services-in-docker/
Mon, 08 Oct 2018 13:13:34 +0000

In the previous post we moved the project to Docker. The idea was to keep exactly the same functionality (without even touching the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it is exactly overengineering, indeed), but I want to build something with different services working together. Let’s start.

We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via websockets.

With those premises we are going to need:

- Frontend: UI5 application
- Backend: PHP/Lumen application
- Worker: Node.js application listening to a RabbitMQ event and serving the websocket server (using socket.io)
- Nginx server
- PostgreSQL database
- RabbitMQ message broker

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.
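The click flow described above can be illustrated with an in-process queue standing in for RabbitMQ (just a demonstration of the decoupling, not the real worker code, which is Node.js and PostgreSQL):

```python
import queue
import threading

events = queue.Queue()
state = {"clicks": 0}

def backend_click_handler():
    # The backend doesn't increment anything itself: it just emits an
    # event (RabbitMQ in the real project, an in-process queue here)
    events.put({"type": "click"})

def worker():
    # The worker consumes the event and persists the counter (PostgreSQL
    # in the real project); it would then push the new value to the
    # browser via websockets. None is a sentinel to stop the loop.
    while True:
        event = events.get()
        if event is None:
            break
        state["clicks"] += 1
```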

Working with SAPUI5 locally (part 2). Now with Docker
https://gonzalo123.com/2018/09/03/working-with-sapui5-locally-part-2-now-with-docker/
Mon, 03 Sep 2018 12:13:23 +0000

In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part, we’ll see how to set up our environment using Docker.

I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for both.

The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image.

In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

With this configuration we’re exposing two ports: 8080 for the frontend and 8000 for the backend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.

We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it). We’ll use SCP to host the frontend.

This configuration is just an example: without filesystem mapping, without Xdebug in the backend, and without exposing the backend externally (only the frontend can use it).

And that’s all. You can see all the source code in my github account.

Working with SAPUI5 locally and deploying in SCP
https://gonzalo123.com/2018/08/20/working-with-sapui5-locally-and-deploying-in-scp/
Mon, 20 Aug 2018 14:16:16 +0000

When I work with SAPUI5 projects I normally use WebIDE. WebIDE is a great tool, but I’m more comfortable working locally with my own IDE. I’d had this idea in my mind for a while, but I never found the time slot to work on it. Finally, after finding this project from Holger Schäfer on github, I realized how easy it is, and I started to work with it and adapt it to my needs.

The base of this project is localneo. Localneo starts an HTTP server based on the neo-app.json file. That means we’re going to use the same configuration as we have in production (in SCP). Of course we’ll need destinations. We only need one extra file called destination.json where we’ll set up our destinations (it creates one HTTP proxy, nothing else).

In this project I’ll create a simple example application that works with one API server.

The build process

Before uploading the application to SCP we need to build it. The build process optimizes the files and creates the Component-preload.js and sap-ui-cachebuster-info.json files (to ensure our users aren’t using a cached version of our application).
We’ll use grunt to build our application. Here we can see our Gruntfile.js.

In our Gruntfile I’ve also configured a watcher to build the application automatically and trigger the live reload (to reload my browser every time I change the frontend).

Now I can build the dist folder with the command:

grunt

Deploy to SCP

The deploy process is very well explained in Holger’s repository.
Basically we need to download the MTA Archive Builder and extract it to ./ci/tools/mta.jar.
We also need the SAP Cloud Platform Neo Environment SDK (./ci/tools/neo-java-web-sdk/).
We can download those binaries from here.

Then we need to fill in our SCP credentials in ./ci/deploy-mta.properties and configure our application in ./ci/mta.yaml.
Finally we run ./ci/deploy-mta.sh (here we can set our SCP password so we don’t have to type it on each deploy).

Playing with Grafana and weather APIs
https://gonzalo123.com/2018/07/23/playing-with-grafana-and-weather-apis/
Mon, 23 Jul 2018 11:18:11 +0000

Today I want to play with Grafana. Let me show you my idea:

I’ve got a Beewi temperature sensor that I’ve played with previously. Today I want to show the temperature in a Grafana dashboard.
I also want to play with the openweathermap API.

First I want to retrieve the temperature from the Beewi device. I’ve got a node script that connects to the device via Bluetooth using the noble library.
I only need to pass the sensor’s MAC address and I obtain a JSON with the current temperature.
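From Python I can then consume that JSON output; a sketch (the script name and the JSON field name are assumptions — adjust them to the real node script):

```python
import json
import subprocess

def parse_sensor_output(raw):
    # The node script prints something like {"temperature": 21.5}
    data = json.loads(raw)
    return float(data["temperature"])

def read_temperature(mac):
    # Hypothetical invocation of the noble-based node script
    raw = subprocess.check_output(["node", "beewi.js", mac])
    return parse_sensor_output(raw)
```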

Playing with Docker, MQTT, Grafana, InfluxDB, Python and Arduino
https://gonzalo123.com/2018/06/04/playing-with-docker-mqtt-grafana-influxdb-python-and-arduino/
Mon, 04 Jun 2018 12:33:16 +0000

I must admit this post is just an excuse to play with Grafana and InfluxDB. InfluxDB is a cool database especially designed to work with time series. Grafana is an open source tool for time series analytics. I want to build a simple prototype. The idea is:

One Arduino device (an esp32) emits an MQTT event to a mosquitto server. I’ll use a potentiometer to emulate a sensor (imagine here, for example, a temperature sensor instead of the potentiometer). I’ve used this circuit before in other projects.

One Python script will be listening to the MQTT event on my Raspberry Pi and it will persist the value to the InfluxDB database.

I will monitor the state of the time series given by the potentiometer with Grafana.

I will create one alert in Grafana (for example when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state

One microservice (a Python Flask server) will be listening to the webhook and it will emit a MQTT event depending on the state

Another Arduino device (one NodeMcu in this case) will be listening to this MQTT event and it will activate a LED. Red one if the alert is ON and green one if the alert is OFF

The server
As I said before we’ll need three servers:

MQTT server (mosquitto)

InfluxDB server

Grafana server

We’ll use Docker. I’ve got a Docker host running on a Raspberry Pi 3. The Raspberry Pi is an ARM device, so we need docker images for this architecture.

Grafana
In grafana we need to do two things. First, create a datasource from our InfluxDB server. It’s pretty straightforward to do it.

Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.

That’s the query that I’m using to plot the main graph:

SELECT last("value")
FROM "pot"
WHERE time >= now() - 5m
GROUP BY time($interval) fill(previous)

Here we can see the dashboard

And here my alert configuration:

I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify us when the state of the alert changes.

Webhook listener
Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers, but in this project I’ll use Python and Flask.

We need to handle HTTP Basic Auth and emit an MQTT event. MQTT is a very simple protocol, but it has one very nice feature that fits like a glove here. Let me explain it:
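A sketch of both pieces (the topic name and hostname are assumptions; the publish call uses paho-mqtt’s helper, which the real Flask handler could call after checking the credentials):

```python
import base64

def basic_auth_ok(auth_header, user, password):
    # auth_header is the request's Authorization header, e.g. "Basic Z3I..."
    expected = base64.b64encode(f"{user}:{password}".encode()).decode()
    return auth_header == f"Basic {expected}"

def publish_state(state):
    # Requires paho-mqtt (pip install paho-mqtt); "alerts/pot" is my own
    # topic name. retain=True is the key part, explained below.
    import paho.mqtt.publish as publish
    publish.single("alerts/pot", payload=state, retain=True,
                   hostname="mosquitto")
```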

Imagine that we’ve got our system up and running and the state is “ok”. Now we connect a device (for example one with big red/green lights). Since the “ok” event was fired before we connected the lights, our green light will not be switched on. We’d have to wait until the next “alert” event to see any light. That’s not cool.

MQTT allows us to “retain” messages. That means that we can emit messages with the “retain” flag to one topic, and when we connect a device to this topic later, it will receive the message. This is exactly what we need.
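The behaviour of the retain flag can be illustrated with a tiny in-memory broker (just a demonstration of the concept, not real MQTT code):

```python
class TinyBroker:
    """Illustrates MQTT's retain flag: a client that subscribes *after*
    a retained publish still receives the last retained message."""
    def __init__(self):
        self.retained = {}
        self.subscribers = {}

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload
        for callback in self.subscribers.get(topic, []):
            callback(payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:  # late joiner gets the retained message
            callback(self.retained[topic])

broker = TinyBroker()
broker.publish("alert/state", "ok", retain=True)  # fired before the lights connect
received = []
broker.subscribe("alert/state", received.append)  # the lights connect later...
# received == ["ok"] -> the green light can switch on immediately
```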

Finally, the NodeMCU. This part is similar to the esp32 one. Our LEDs are on pins 4 and 5. We also need to configure the WiFi and connect to the MQTT server. The NodeMCU and the esp32 are similar devices, but not the same: for example, we need to use different libraries to connect to the WiFi.

This device will be listening to the MQTT event and will switch on one LED or the other depending on the state.

Happy logins. Only the happy user will pass
https://gonzalo123.com/2018/05/07/happy-logins-only-the-happy-user-will-pass/
Mon, 07 May 2018 12:11:14 +0000

Login forms are boring. In this example we’re going to create a special login form, only for happy users. Happiness is something complicated but a smile, at least, is easier to obtain, and everything is better with a smile :). Our login form will only appear if the user smiles. Let’s start.

I must admit that this project is just an excuse to play with different technologies I wanted to try. Weeks ago I discovered a library called face_classification. With this library I can perform emotion classification from a picture. The idea is simple: we create a RabbitMQ RPC server script that answers with the emotion of the face within a picture. Then we obtain one frame from the video stream of the webcam (with HTML5) and we send this frame via websocket to a socket.io server. This websocket server (node) asks the RabbitMQ RPC server for the emotion and sends back to the browser the emotion and the original picture with a rectangle over the face.

Frontend

Since we’re going to use socket.io for websockets, we’ll use the same script to serve the frontend (the login and the HTML5 video capture).

Here we’ll connect to the websocket and emit the webcam frame to the server. We’ll also be listening to one event called ‘response’, where the server will notify us when an emotion has been detected.

Opencv and esp32 experiment. Moving a servo with my face alignment
https://gonzalo123.com/2018/04/09/opencv-and-esp32-experiment-moving-a-servo-with-my-face-alignment/
Mon, 09 Apr 2018 12:16:01 +0000

One Saturday morning while having breakfast I discovered the face_recognition project. I started to play with its opencv example. I used my picture and, wow! It works like a charm. It’s pretty straightforward to detect my face, and I can also obtain the face landmarks. One of the landmarks I can get is the nose tip. Playing with this script I realized that with the nose tip I can determine the position of my face: I can see whether it’s aligned to the center or moved to one side. As I have a new IoT device (an ESP32), I wanted to do something with it: for example, controlling a servo (SG90), moving it from left to right depending on my face position.

First we have the main Python script. With this script I detect my face, the nose tip and the position of my face. With this position I emit an event to an MQTT broker (a mosquitto server running on my laptop).

Now another Python script will be listening to the MQTT events and it will trigger an event with the position of the servo. I know this second Python script is maybe unnecessary (we could move its logic into the esp32 and the main opencv script), but I was playing with MQTT and I wanted to decouple things a little bit.
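The mapping from face position to servo angle is a simple linear interpolation; a sketch (the frame width and the 0–180 range for the SG90 are assumptions):

```python
def nose_to_servo_angle(nose_x, frame_width, min_angle=0, max_angle=180):
    """Map the nose tip's x coordinate (0..frame_width) to a servo angle,
    clamping values that fall outside the frame."""
    nose_x = max(0, min(nose_x, frame_width))
    return round(min_angle + (max_angle - min_angle) * nose_x / frame_width)

# face centered in a 640px-wide frame -> servo at the middle position
print(nose_to_servo_angle(320, 640))  # -> 90
```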