This short post illustrates how to set up a master/slave federation of Prometheus instances (apparently, we must say Prometheis) in Kubernetes. People who use kubernetes_sd are probably already familiar with the prometheus.io/scrape annotation that can be set on pod specs, as explained here. There is no specific built-in feature in Prometheus for turning scraping on and off per pod; it's just a usage of the very generic relabeling feature. And we can do something similar for federation.

As explained on Robust Perception, federation is a common way to scale Prometheus when a single instance is no longer enough to handle millions of timeseries. This is done with a master Prometheus (or, as they call it, the global Prometheus), which is configured to scrape the slaves. The configuration described here is fairly static: we will have one master and n slaves, with n known in advance. It's typical for a split-by-use strategy: for instance, all the database metrics are collected by Prometheus A, all the app metrics by Prometheus B, etc. We assume that every monitored pod can be bound to one and only one Prometheus slave.

1. Set up the master

To define the Prometheus master, just create a pod with this specific configuration (prometheus.yml):

```yaml
global:
  # set whatever you like
  scrape_interval: 10s
  scrape_timeout: 10s

scrape_configs:
  - job_name: dc_prometheus
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="my-kubernetes-job"}'
    static_configs:
      - targets:
        - prometheus-slaveA:9090
        - prometheus-slaveB:9090
```

You can see the slave targets listed here. Assuming they're all in the same namespace, the master will be able to reach each slave by its Service name, thanks to the Kubernetes DNS service.

Note also the '{job="my-kubernetes-job"}' match[] param. It tells the master to federate all metrics matching this criterion. In my example, that's going to be all metrics, but of course in a real-world case I would have to be smarter than that: it should be a fine-tuned subset of metrics.

2. Set up the slaves

This is a slightly modified version of the examples found there. Notice how we ignore the usual relabeling on "__meta_kubernetes_pod_annotation_prometheus_io_scrape", and declare a relabeling on "__meta_kubernetes_pod_annotation_prometheus_io_slave" instead. It means that every pod with an annotation prometheus.io/slave: slaveA will be handled by this Prometheus instance. The operation can be repeated for every slave wanted, by just replacing slaveA with something else. In OpenShift, this value can conveniently come from a template parameter.
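For completeness, here is a sketch of what a slave's scrape configuration could look like (the job name and annotation value are examples, adapt them to your setup):

```yaml
scrape_configs:
  - job_name: my-kubernetes-job
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/slave: slaveA
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_slave]
        action: keep
        regex: slaveA
```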

I've built a small program, "discomon", as a POC that runs on Kubernetes and spawns Grafana dashboards out of Prometheus metrics (actually, it doesn't have to run on Kubernetes, but that's what my POC was about). If you wonder why such a 70s-sounding name: "Disco-" is for discovery and "-mon" for monitoring. But to be honest, all the discovery part is delegated to Prometheus itself.

The goal is to get the relevant dashboards in Grafana as soon as Prometheus collects metrics that match some recognized patterns. When you deploy a new app, that app is discovered by Prometheus, which eventually finds its metrics. If it has JVM metrics, discomon will create a JVM dashboard. If it has Vert.X metrics… OK, you get the idea.

Vert.X uses Dropwizard metrics under the hood, by default, when metrics are enabled (the other option being Hawkular Metrics). They can be exposed as Prometheus endpoints, as described here. It works painlessly, but has a downside: the metrics don't make any use of Prometheus labels, which can be annoying when you want to build dashboards or run elaborate queries.

A reason for that is that Dropwizard doesn't handle tags or labels yet. A PR exists and has been merged, but it is not in the current release branch; it will be in the long-awaited version 4 of Dropwizard Metrics.

But there's a fairly simple workaround for this issue: using Prometheus' metric relabeling in the scraper configuration. Here's a short example that translates a metric vertx_http_servers_0_0_0_0:8081_responses_2xx_total into vertx_http_servers_responses_total{code="2xx",server="0_0_0_0:8081"}.

In prometheus.yml, the idea is a metric_relabel_configs section along these lines (a sketch: the regexes shown here are illustrative, adapt them to your metric names):

```yaml
scrape_configs:
  - job_name: vertx
    static_configs:
      - targets: ['localhost:8081']
    metric_relabel_configs:
      # Label from HTTP code family & server name (ex from vertx_http_servers_0_0_0_0:8081_responses_2xx_total)
      - source_labels: [__name__]
        regex: 'vertx_http_servers_(.+)_responses_(.+)_total'
        target_label: server
        replacement: '$1'
      - source_labels: [__name__]
        regex: 'vertx_http_servers_(.+)_responses_(.+)_total'
        target_label: code
        replacement: '$2'
      # Finally, rename the metric itself
      - source_labels: [__name__]
        regex: 'vertx_http_servers_(.+)_responses_(.+)_total'
        target_label: __name__
        replacement: 'vertx_http_servers_responses_total'
```

It's been… wow… 15 years now since the Agile Manifesto was written. Java was at 1.3 (no enums, no generics, let alone lambdas). We're often tired of hearing that word – AGILE – especially when it comes from recruiters or marketers trying to convince us how cool their company is. But the thing is, it's not cool anymore. In the past few years, more and more people have announced its death, with some well-argued writings. Probably, as developers and tech-addicts, we don't like old things. And agile is becoming old.

Of course, nobody says that everything in the manifesto must be thrown away. In Agile is Dead, Matthew Kern reports many other articles, or simple claims, that Agile is dead – notably from some of the Agile Manifesto founders themselves. Among the arguments against agile, I want to distinguish actual arguments against the method from the observation – and I cannot disagree – that the "agile" term is over-marketed, its meaning diluted in a crowd of context-specific interpretations. But that shouldn't affect the pertinence of the original manifesto in any way.

So, let's focus on the real issues with agile. I'm only considering the original Agile Manifesto here, not its derivatives, like Scrum. I cannot pretend to have a large enough overview of how agile is used across companies. I even doubt that anybody could. Maybe the whole point is there: defining and promoting a methodology implies decontextualizing it, with the risk of turning ideas into dogma. To save agile from death, maybe the solution is to turn it back from dogma into plain ideas that we accept, reject or adapt for a given context.

"Working software is the primary measure of progress", they say. This is maybe the most controversial point. Companies may need (not just want, but really need) a clear plan of what's going to happen. Not vague estimations, but accurate plans. Think of a small, middle-aged company, losing market share, losing money, on a very defensive strategy (nb: I am not talking about where I work). It can't be satisfied with an "it's done when it's done" plan. Guarantees might be vital.

Now, let's focus on my very specific case: the company where I work. We're currently not agile, which makes it, in my opinion, a good case for seeing whether turning agile would benefit our R&D processes. So let's go through the 12 principles behind the manifesto, one by one:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

This sounds kind of obvious: satisfying the customer is of course an objective (although not the only one), and it is achieved through the delivery of valuable software. The mention of "continuous delivery" builds trust between the parties; however, I would be more cautious about the necessity of earliness, which I think is very optional. It could even be confusing or worrying for the customer. At the very first stage, I think trust must be established by communication rather than by early prototyping.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

As with the "measure of progress" discussed above, companies may need plans, and to stick to them as much as possible. The cost of change cannot be ignored; welcoming it is a privilege of few companies. From the developer's standpoint, a change can be perceived positively as well as negatively, but we're OK, we can manage that. We can change pieces of code once, twice, but not endlessly. We never like throwing our code away, but we accept it when it's well argued. Too much instability, however, can cause distrust towards managers and the company. I think this rule should be moderated – or it is a reason for choosing not to be agile.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Apart from the early stages, which I discussed in the first point, getting frequent feedback is important, and frequent delivery is one way (though surely not the only one) to achieve it. The downside is that increasing the number of delivery cycles also increases the amount of redundant work for other teams, such as QA.

Business people and developers must work together daily throughout the project.

Working together with business people is fine; understanding each other is crucial. But daily? Really? Why the hell? It's just going to create useless noise and make us lose focus on what's important right here, right now. Maybe "weekly" is fine, or even "monthly". Wait a minute… maybe "when it's needed" is fine, left up to the manager and/or the team. This point shows a lack of trust in the developers.

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

I think this is true even for non-agile teams. Trust is a key factor, whatever the organization is.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Hmm, yes, probably. But does it mean that remote working should be avoided as much as possible? That fully remote positions are impossible in agile teams? This is without doubt one of the most outdated points of the manifesto. 15 years ago, working remotely was kind of exceptional; it's now so common. Some big companies work extensively with fully remote positions, and have managed to elaborate theories to make it work with agile methods. It surely decreases the efficiency of communication, but I think it's an acceptable compromise given the benefits.

Working software is the primary measure of progress.

I've begun to discuss this above. In an ideal world ruled by developers, it would be fine. "When will you deliver that feature?", they ask. "When it's done", you'd answer. I'd love to live in that world; unfortunately, I've never seen it. People need plans, companies need strategy. I guess some large companies are able to work that way – at least for less important projects that cannot compromise their position. Or startups with an aggressive strategy, taking risks to grow. But it wouldn't work for every company – not the ones I know.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

I am not 100% sure about this one. At my work we don't have a constant pace, and I was frustrated with that at first: from time to time I felt underused, bored. Then I found it has some benefits. It's time you can spend thinking, improving your code, improving the product with very little work, taking initiatives. It's a kind of fresh air blowing through the open space, useful for increasing your creativity. On the other hand, there's definitely some waste of time; it should just be kept under control.

Continuous attention to technical excellence and good design enhances agility.

Simplicity – the art of maximizing the amount of work not done – is essential.

This is my credo, although I must admit it is controversial – or at least, not always the best way. I tend to be more "KISS" whereas my team-mates tend to be more "design for change", with layers upon layers… I love baklava, but not in my code. We've had some interesting debates in the team. There are pros and cons, and we cannot totally eliminate the "design for change" approach for the sake of simplicity. When you start building a heavy application and you know it will have to be extended in some ways, but you don't know yet exactly which ways… then sometimes it's good to prepare for the future. Yes, there will be refactors, because your "design for change" was not totally perfect. But those little refactors will probably be far less time-consuming than a big-scale refactor caused by KISSing a little too much.

In my opinion, the "simplicity" rule should be followed when you build light modules, micro-services, things on which few other modules will depend. But when it comes to building a more critical module, on which many others will depend, it's better to think a little about the future before going for the simplest implementation, unless you're fine with building big balls of mud.

The best architectures, requirements, and designs emerge from self-organizing teams.

That's something missing for us. Our global architecture is designed top-down, sometimes neglecting the ideas that could emerge from individuals. I am not saying that the developers are like bots – mere executors. They have their perimeter of freedom, but it's constrained to a limited scope – like a feature's scope – and doesn't encompass the most global, most critical parts of the software design. The developers should be involved in all the conception stages, including the overall architecture. From the company's standpoint, it maximizes the chances of seeing the best possible solution emerge. From the developer's standpoint, it avoids the frustration of being underestimated. It doesn't mean that every single developer should attend every meeting – that would be nonsense. Groups can be formed on a volunteer basis. But individual knowledge and ideas shouldn't be ignored.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

It suggests we've built processes and must sometimes adjust them, so we create a process for managing processes… wait… what? One Process To Rule Them All. Maybe we've gone too far.

So where does that leave us? Does "agile" still mean something if we pick elements from its manifesto à la carte? Why does everybody announce agile's death nowadays? Actually, I don't think the manifesto is really outdated, apart from one or two things. Most of its ambitions are still valuable today. The majority of the criticism targets Scrum, that is, an attempt to materialize agile concepts through a variety of processes. Scrum exists because it took agile as a dogma and built plenty of processes upon it. If the dogma turns out to be wrong in some context, everything falls down. As Erik Meijer says in his hilarious talk One Hacker Way, we end up talking about code more than we write code. The majority of processes in, I think, most companies are in place because at some point some human mistakes were made. Processes are there to replace trust. Remember principle number 5? TRUST. A good company, with talented developers, should in my opinion avoid Scrum and consider agile as a non-monolithic reference.

The manifesto itself must be deconstructed to fit one's needs, given a proper context. But what is "agile", in the end? The manifesto does not define it. We all want to be agile, but we don't adhere to the whole manifesto. To me, agile in a software development context is a multi-objective optimization: maximizing quality and rapidity while minimizing processes. Which is, I agree, a delicate optimization.

As stated in my previous post, I started Mipod.X as a Vert.X application programmed in Java. You can use either Maven or Gradle as the build tool; I use the one I know best, which is Maven. Vert.X provides configuration samples for Maven and Gradle, so I just followed the documentation, which is well written. Basically, Vert.X expects a "fat jar" to be provided.

I. Vert.X with Maven

For now, at this early development stage, Mipod.X consists of one parent Maven module and four child modules: application, data-model, mpd-connector and web

```
mipod.x
|---application
|---data-model
|---mpd-connector
|---web
```

Obviously, the entry-point of the application is in application. So let’s check this one first:

In pom.xml, I define the application entry point as a variable (the class must be a Verticle):
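A minimal sketch of what that looks like (the property name and verticle class below are illustrative placeholders, not the project's actual values):

```xml
<properties>
    <!-- Entry point of the application: this class must be a Verticle -->
    <main.verticle>com.jotak.mipod.application.MainVerticle</main.verticle>
</properties>
```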

The shade plugin will basically pick up every dependency and aggregate them to create a single jar. So I have to add all my dependencies in this pom:

```xml
<dependencies>
    <dependency>
        <groupId>com.jotak.mipod</groupId>
        <artifactId>web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.jotak.mipod</groupId>
        <artifactId>mpd-connector</artifactId>
    </dependency>
    <dependency>
        <groupId>com.jotak.mipod</groupId>
        <artifactId>data-model</artifactId>
    </dependency>
    <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-core</artifactId>
    </dependency>
</dependencies>
```

And there we are! Running mvn install on mipod.x will generate (and install) the fat jar.

II. Writing some Javascript and ReactJS

Integrating ReactJS or any other Javascript with Vert.X is fairly easy. As described in the documentation, all static resources are taken by default from src/main/resources/webroot. So I create this folder in my web maven module, and add an index.html plus some external dependencies such as react, react-dom, sockjs and vertxbus.js. Vertxbus.js allows communication with the Vert.X event bus through sockjs (websockets). It’s as simple as:

```javascript
var eventBus = new vertx.EventBus(window.location + "eventbus");
eventBus.onopen = function () {
    eventBus.registerHandler("info", function (evt) {
        var main = document.getElementById("main");
        var node = document.createElement("p");
        node.innerHTML = evt.line;
        main.appendChild(node);
    });
    eventBus.publish("init", {});
};
```

This code listens for "info" messages on the event bus and writes them into the HTML DOM, assuming there's an element with id "main". It also publishes an "init" event on the event bus.

To begin with React, we can add a dependency on Babel in the browser, as explained here. So I insert the example code in my index.html:

```html
<script type="text/babel">
    ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('example')
    );
</script>
```

and we've got our first JSX lines of code working. However, this is temporary: I'll describe below how to switch from Babel to TypeScript.

On the Java side, a Verticle initializes the HTTP server and the event bus permissions. It's quite easy once again.

I won’t talk about programming with Vert.X right here. Just notice how the server router is configured to accept in and out event bus messages from and to the client.
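For reference, a minimal sketch of such a verticle (Vert.X 3 APIs; the class name and port are illustrative, and the bridge addresses mirror the "info"/"init" addresses used by the client code above):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.StaticHandler;
import io.vertx.ext.web.handler.sockjs.BridgeOptions;
import io.vertx.ext.web.handler.sockjs.PermittedOptions;
import io.vertx.ext.web.handler.sockjs.SockJSHandler;

public class WebVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);
        // Bridge the event bus to the browser: "info" may go out, "init" may come in
        BridgeOptions options = new BridgeOptions()
                .addOutboundPermitted(new PermittedOptions().setAddress("info"))
                .addInboundPermitted(new PermittedOptions().setAddress("init"));
        router.route("/eventbus/*").handler(SockJSHandler.create(vertx).bridge(options));
        // Serve static resources from src/main/resources/webroot
        router.route().handler(StaticHandler.create());
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
}
```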

III. Switching to TypeScript

Babel is pretty cool, but I really miss static typing in my Javascript, so TypeScript is definitely the way to go (well, it could have been PureScript as well; I guess a similar process can be done with grunt-purescript). The goal is to integrate TypeScript into the Maven build process. I'll use Grunt for this, which you probably already know if you're familiar with the Javascript ecosystem. It involves a couple of other tools to be properly configured:

Nodejs and its package.json. Nodejs will only be used in the build process and is not required at runtime.

```
web
|---src/main
|    |---resources/webroot (static HTML, external dependencies but no TypeScript here)
|
|---typescript
|    |---main (TypeScript sources)
|---Gruntfile.js
|---package.json
```

To set up this workflow, the first things to do are to install nodejs and npm, and to create your package.json file. You don't need to create a full package.json: it will only be used in your workflow, and it doesn't have to be published publicly.
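A minimal package.json sketch (the devDependencies and versions here are indicative, not the project's actual file; note that the name field is what the Gruntfile's <%= project.name %> resolves to):

```json
{
  "name": "mipod.x",
  "version": "0.0.1",
  "private": true,
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-ts": "^5.0.0",
    "typescript": "^1.8.0"
  }
}
```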

You can already run a npm install from the folder that contains package.json, so that you’ll get all the dependencies ready to be used.

Then you must configure the Gruntfile.js:

```javascript
module.exports = function (grunt) {
    grunt.initConfig({
        project: grunt.file.readJSON('package.json'),
        dir: {
            "source_ts": "main",
            "target": "target/main"
        },
        ts: {
            compile: {
                src: "<%= dir.source_ts %>/**/*.ts",
                out: "<%= dir.target %>/<%= project.name %>.js",
                options: {
                    target: "es5",
                    declaration: false
                }
            }
        }
    });
    grunt.loadNpmTasks("grunt-ts");
    grunt.registerTask("default", ['ts:compile']);
};
```

With this configuration, all TypeScript files in “<some root>/main” will be transpiled into a single file “<some root>/target/main/mipod.x.js“. I’ll explain later how “<some root>” is set, this is part of the grunt-maven-plugin.

There's an important remark to make here: TypeScript's file concatenation can only work if there are no external module imports in the source files. It took me some time to find out this issue in my code, because I used to write TypeScript for nodejs with lots of external module dependencies. So I removed all import and require references from my client-side TypeScript files, and used internal modules instead. This is well described in this blog post. Basically, you must use module namespaces and ///<reference… />. Setting "verbose: true" in the ts:compile options of the Gruntfile helped me a lot.
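To illustrate the internal-module style (the namespace and function below are just an illustration, not actual Mipod.X code):

```typescript
// Internal module: no import/require at all, so tsc can concatenate
// every file into the single --out bundle.
namespace Mipod {
    export function greet(name: string): string {
        return "Hello, " + name;
    }
}

// A file using it would start with: /// <reference path="mipod.ts" />
console.log(Mipod.greet("world"));
```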

Once this is done, there are just two more steps for a complete workflow: running grunt from maven, and copying the generated file to the right place.

To integrate grunt into maven, I use the grunt-maven-plugin. This is done through web‘s pom.xml:
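Here is a sketch of that plugin configuration (the version and directory values are indicative; check the grunt-maven-plugin documentation for the exact parameters):

```xml
<plugin>
    <groupId>pl.allegro</groupId>
    <artifactId>grunt-maven-plugin</artifactId>
    <version>1.5.0</version>
    <configuration>
        <jsSourceDirectory>typescript</jsSourceDirectory>
    </configuration>
    <executions>
        <execution>
            <goals>
                <!-- copy sources to the grunt build directory, run npm install, then grunt -->
                <goal>create-resources</goal>
                <goal>npm</goal>
                <goal>grunt</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```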

What does it do? To sum up, it first takes all sources from "sourceDirectory/jsSourceDirectory", that is "mipod.x/web/typescript", and copies them to "gruntBuildDirectory", that is "mipod.x/web/target/grunt". Then grunt is invoked from that working directory (the "<some root>" mentioned above). Put together, my final JS file will be "mipod.x/web/target/grunt/target/main/mipod.x.js".

If you need to debug, you can add
<gruntOptions><gruntOption>--verbose</gruntOption></gruntOptions> in the configuration.

Next and final step is to copy the generated JS file to the right place in webroot, before the jar is actually packaged by maven. This is done through the maven-resources-plugin at compile time:
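A sketch of that step (the paths follow those given above; the execution id and output location are assumptions):

```xml
<plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-generated-js</id>
            <phase>compile</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <!-- copy the grunt output next to the other webroot resources before packaging -->
                <outputDirectory>${project.build.outputDirectory}/webroot</outputDirectory>
                <resources>
                    <resource>
                        <directory>${basedir}/target/grunt/target/main</directory>
                        <includes>
                            <include>*.js</include>
                        </includes>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
```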

Nothing fancy here, it speaks for itself. Link it in your index.html, and you’re done.

IV. What about JSX?

… done? Hum, we haven't done anything for JSX yet. Fortunately, this is a very easy step, since JSX is correctly handled by tsc, the TypeScript transpiler.

Since you use TypeScript, you can write some TSX files. You’ll find more information on that from Microsoft’s GitHub. Put those .tsx files in the TypeScript directory (or subdirs), that is “web/typescript/main“.

And then, just add a task in the Gruntfile to compile TSX:

```javascript
module.exports = function (grunt) {
    grunt.initConfig({
        project: grunt.file.readJSON('package.json'),
        dir: {
            "source_ts": "main",
            "target": "target/main"
        },
        ts: {
            compile: {
                src: "<%= dir.source_ts %>/**/*.ts",
                out: "<%= dir.target %>/<%= project.name %>.js",
                options: {
                    target: "es5",
                    declaration: false
                }
            },
            compile_tsx: {
                src: "<%= dir.source_ts %>/**/*.tsx",
                out: "<%= dir.target %>/<%= project.name %>.jsx",
                options: {
                    target: "es5",
                    declaration: false,
                    jsx: "react"
                }
            }
        }
    });
    grunt.loadNpmTasks("grunt-ts");
    grunt.registerTask("default", ['ts:compile', 'ts:compile_tsx']);
};
```

The task “ts:compile_tsx” will invoke tsc on all tsx files, with option “jsx: react”, and generate a single .jsx file similar to what we did for .ts/.js.

I recently attended a presentation of the Vert.X toolkit, a powerful JVM-based library that lets you create reactive, scalable applications with many advantages. There's a lot of similarity with nodejs, but I bet it's faster and more robust thanks to decades of JVM optimizations. I was really eager to try it, but still had to find a project it would fit in.

Since I'm also a fan of the Raspberry Pi and want to use it as an MPD server, I started to develop a web interface to control MPD. There are already lots of MPD clients which probably do a lot more than what I intend to do, but that's how new projects start, right?

In the short term, I want to play around with Vert.X and ReactJS. Vert.X is polyglot, so I could write it in Groovy, Ruby or Javascript… but I'll stick to what I know best, Java. ReactJS is Javascript, but I can't be satisfied with plain old Javascript. Babel looks like a nice solution as an ES6 implementation, but my favour goes to TypeScript, which is close to ES6 with the addition of static typing, a huge plus for medium or large projects. The TypeScript transpiler is able to understand JSX syntax, so it should be fine with React.

The project is at its very beginning and I can’t tell how far it will go. It’s on GitHub.

I’ll post shortly about how I set up a build workflow with all these techs [update: it’s there].