Cloud service offerings like SaaS, PaaS, IDaaS, FaaS, IaaS, and iPaaS have become very popular. Gartner predicts that cloud services revenue will grow by a further 17% in 2019. More and more companies are looking to utilize the cloud – public, private, or hybrid – to stay competitive in a dynamically changing world of requirements. With the cloud they can adjust their costs in a more agile way and scale more efficiently, reducing the time to market of their products or services. AI, Big Data & Analytics, and Blockchain are topics that many companies will evaluate over the next few years, and without the cloud these could become cumbersome and expensive to pursue.

Let's look at metrics again, this time from the perspective of their definition and usability. It was not easy for me to understand the types of metrics and how to read them. Which metrics are useful and therefore worth paying attention to? This blog post will focus on Dropwizard metrics, with a sample application for a bit of practice.
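As a warm-up before the Dropwizard specifics, here is a minimal, dependency-free sketch (plain Java, deliberately *not* the Dropwizard API) of the difference between two of the basic metric types: a counter, which the application changes explicitly, and a gauge, which simply reads some value on demand.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Illustrative sketch only; the real library (io.dropwizard.metrics)
// provides Counter, Gauge, Meter, Histogram and Timer implementations.
public class MetricsSketch {
    // A counter is incremented/decremented explicitly by application code.
    static final class Counter {
        private final AtomicLong count = new AtomicLong();
        void inc() { count.incrementAndGet(); }
        void dec() { count.decrementAndGet(); }
        long getCount() { return count.get(); }
    }

    // A gauge reads an instantaneous value, e.g. a queue size or used memory.
    static final class Gauge<T> {
        private final Supplier<T> supplier;
        Gauge(Supplier<T> supplier) { this.supplier = supplier; }
        T getValue() { return supplier.get(); }
    }

    public static void main(String[] args) {
        Counter pendingJobs = new Counter();
        pendingJobs.inc();
        pendingJobs.inc();
        pendingJobs.dec();
        System.out.println("pending=" + pendingJobs.getCount()); // prints pending=1

        Gauge<Integer> queueDepth = new Gauge<>(() -> 42); // hypothetical reading
        System.out.println("depth=" + queueDepth.getValue()); // prints depth=42
    }
}
```

Reading them differs accordingly: a counter only makes sense over time (its trend), while a gauge is meaningful at the moment you look at it.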

I came across Apache Spark when looking for tools for an ETL process. I am a big fan of Scala, and Spark with Scala was mentioned a few times in an ETL context, so I couldn't pass it up as a potential option.

The first Cloud Developer Days conference ended two weeks ago. It took place on 28–29 May 2018 in Cracow, Poland. I hadn't come across a similar conference in my country before, i.e. one focused on the cloud in general rather than on a specific technology.

The main focus was on Cloud Security, Machine Learning, Artificial Intelligence, Serverless and Blockchain.

Consumer Groups

So far the following ideas have been introduced: topic, message, partition, producer, consumer, and broker. By now, you should understand how Kafka stores messages on disk using a commit log, topics, and partitions. You should also know how a message is structured.
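To make that recap concrete, here is a small, self-contained sketch (plain Java, not the Kafka client) of the storage model: a topic is a set of partitions, each partition is an append-only log, and a message's position is identified by (topic, partition, offset). The keyed partitioning below mimics the idea that messages with the same key land in the same partition, which is how per-key ordering is preserved.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a Kafka topic: partitions as append-only commit logs.
public class TopicSketch {
    private final Map<Integer, List<String>> partitions = new HashMap<>();
    private final int partitionCount;

    TopicSketch(int partitionCount) {
        this.partitionCount = partitionCount;
        for (int p = 0; p < partitionCount; p++) {
            partitions.put(p, new ArrayList<>());
        }
    }

    // Same key -> same partition, so per-key ordering holds.
    int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    // Append-only: existing entries are never modified, only new offsets added.
    long append(String key, String value) {
        List<String> log = partitions.get(partitionFor(key));
        log.add(value);
        return log.size() - 1; // the new message's offset within its partition
    }

    String read(int partition, long offset) {
        return partitions.get(partition).get((int) offset);
    }
}
```

Consumers track their position in each partition purely as an offset into this log, which is what makes re-reading from an earlier point cheap.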

It’s time to introduce consumer groups, which are the missing piece of message distribution in Kafka.
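The key property can be sketched in a few lines (plain Java, not the Kafka client): within a single consumer group, each partition is assigned to exactly one member, so the group as a whole covers every partition but no message is delivered to two consumers in the same group. The round-robin assignment below is illustrative, similar in spirit to (but not identical with) Kafka's built-in assignors.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of consumer-group semantics: partitions are spread over the
// group's members, and each partition goes to exactly one member.
public class GroupAssignmentSketch {
    static Map<String, List<Integer>> assign(List<String> consumers, int partitionCount) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String c : consumers) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < partitionCount; p++) {
            // Round-robin: partition p goes to consumer (p mod group size).
            String owner = consumers.get(p % consumers.size());
            assignment.get(owner).add(p);
        }
        return assignment;
    }
}
```

For example, five partitions split across a group of two members as {c1 -> [0, 2, 4], c2 -> [1, 3]}; a second *group* subscribed to the same topic would get its own independent assignment and see every message again.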