A few weeks ago, I created a repo on GitHub summarizing how I stay up to date with technology: chtefi/how-i-stay-updated.

I thought it would be nice to write a blog post about it, for you to find it, and for me to go a bit deeper.

I’ll start by explaining my situation in the IT world and why I follow all the things I’m following, then I’ll list the resources I use to stay up-to-date, so you can get some ideas. ;-)

Feel free to comment about what YOU are using to stay up-to-date. I’m eager to know.

Frontend, backend, what’s the difference?

These days, I’m working as a back-end and big data guy. Before that, I did a lot of front-end, so I also keep my front-end knowledge up to date. It’s still helpful for talking with the frontend teams, and it’s always useful when I want a quick UI to simplify my workflow. I even made a repo where I put everything I know about how to write a good website. Don’t hesitate to PR! I may be a little rusty on some things.

As I was working mostly on the frontend side (and some backend, but nothing fancy), I was bored and wanted to try something new. Reading all that news and stuff about big data made me want to go there and see what the deal was.

I challenged myself, decided to resign from my job (5 years there), and found a job at an amazing company in Paris, Powerspace. The boss was kind enough to let me in, despite the fact that I didn’t have much experience with their stack. He accepted the challenge because he knew I was able to learn quickly and that I definitely wanted to learn, which made me a good resource.

Being good is being a good learner.

There is also this version: being an engineer (what most of us in IT are considered today) is being comfortable with ongoing changes and evolutions.

I was, and he trusted me.

Anyway, it’s been a year, and now I have a much better knowledge of the available backend technologies, how to use and link them, and why they exist. I also like working on the architecture side, thanks to the big data constraints. Finally, I care a lot more about the monitoring and alerting piece; it’s truly something to take into account if you want to know what’s going on each second at the heart of your company.

Working in IT, you probably know Continuous Delivery and Continuous Integration, so you should also be aware of Continuous Learning. Here are the resources I’m using to do that.

Twitter

Of course.

It’s rare that tech people don’t have a Twitter account. This is often where things happen first. It’s an incredible source of information. The character limit is like a continuous TL;DR feed, because we don’t have time to read everything. We just want to be aware it exists, and refer to it when we need to.

We don’t know if Twitter is still going to be there in 10 years. I sometimes wonder how we were doing before that. :-)

Bookmarks

I have a special folder “TO READ” where I bookmark things for later.

Months ago, I managed to keep it stable at around 10 links.
Right now, I have more than 100 links in this folder. I always read a few in the evenings.
Once, I tried Pocket, but for some reason I didn’t like it and stayed with my manual method. There is also Delicious, but I never gave it a try.

I admit it sucks for a few reasons:

No tag system (I’m using Chrome)

You can’t resize the bookmark popup (Windows)

No search in this popup

The whole experience is not great

Well, to summarize: the bookmarks system is just too damn basic. But it does the job.

There is a nice platform you can git clone and install yourself, to save your bookmarks and share them automatically: it’s called Shaarli.

Newsletters

Easy to subscribe to and forget. Emails are where I get a ton of links. I don’t spend the same time on every one of them; it would be way too time consuming!

InfoQ: mass amount of talks with slides, can find great stuff from confs and around the world

Slideshare: sometimes, you just want to get details from slides because it’s more succinct (a video is a sequence of images right?)

Company

Professionally, Slack is very useful (even if it’s too noisy sometimes). Having a dedicated channel where people share links about technology is a big plus. People like to share, but don’t share too much, otherwise it gets lost.

Egghead: nice videos about React, Redux, Cycle, and so on. Not all of them are free.

Books

I have a dozen PDFs on my desktop about interesting stuff (Scala, TDD, Hadoop, Elastic, Akka, architecture, some software…). I’ve finished some, and started some others. I barely have time to read them.

I’m often reading a physical book on the side. It’s more practical for staying focused, and fortunately, I have 2 hours of transit every day, so I try not to waste them.

Why all this ?

Because I love knowing lots of things. It lets you see the bigger picture, know what’s possible, and know where we stand in the domain.

Take the Javascript arrays.

Cool, we can iterate using “for”.

Then do some functional programming and use “.filter”, “.map”, “.concat”.

Then in more powerful languages such as Scala or Haskell, use some monads.

Then use scalaz to use category theory over your data (and lose your mind).

Use Kafka to act as an “array container” but partitioned and distributed.

Do some streaming processing to act as your “filter”, “map” or “reduce”.

Use Spark, Flink etc. to process streams in a distributed fashion, forever.

Be reactive, and use ReactiveX (rxjs/java/scala) to have a powerful API and deal with Observables.

Use an event-driven architecture.

Don’t forget backpressure.

I see that as the whole same thing at different levels of abstraction/simplicity.

Why is it hard?

Because my memory sucks and I’m sure I forget 90% of what I’m reading

Because I don’t have a lot of free time to read and learn all that. I’m trying, sleeping is overrated

Because most of these things won’t be useful to me.

Because I could create stuff and write some code to make money instead

Why is it useful to learn all that?

Because in about a year of doing it, I learned SO MANY things in a lot of domains and got better overall.

But we can’t learn everything

Despite all that, there are some domains I voluntarily don’t look into, because I know it’s just too much, or because I know I will never work with them.

For instance:

mobile apps (except React Native now, from afar)

ops software (I mean Chef, Puppet, Ansible, that kind): no problem using them, but I don’t want to learn the internals, I just don’t care. I just love Docker. :-)

data science: I LOVE working with data scientists, they are awesome people. They know the data better than anyone else in the company. But the ML piece, the Deep Learning piece, is a huge world. I know the basics but it stops there. Still, we should all definitely learn a lot in this domain: this is the present, and it’s our future. Even Google wants to be an “ML First” company, betting on the right horse.

design: I LOVE working with designers, they are awesome people. Without them, we’d still be working on green-on-black screens. I LOVE watching and studying the work they do. I LOVE to criticize front-end stuff, even if I don’t have the answer.

networking: I know the basics and how to debug a bit (chtefi/curated-system-tools), but it’s a very complex world. I suck at understanding companies’ network architectures and acronyms, how it all works, etc.

The cloud: I’ve barely played with AWS. I want to know more, but I’m waiting for a use case. :)

Each job needs specialized people after all.

Blog

As you can see, I also “maintain” this blog.

I mostly pick a topic or a framework I want to learn and share my findings (using a lot of gists too).
It’s useful because you try not to say stupid things and must verify your claims. See this other blog post to understand why you should write blogs.

I have a ton of drafts and titles, but it’s very hard to find the time to write the articles the way you want. It often takes me several evenings doing only that to make an article good enough for me. I still need to improve on that part.

Beside staying up-to-date

Don’t forget your wife, your kids, and your family.

Don’t forget yourself. Go to gym. Don’t try to be too geeky.

Feel free to comment about what YOU are using to stay up-to-date. I’m eager to know.

You are a Java or Scala programmer. You are logging stuff with different levels of severity. And you have probably already used slf4j without even noticing.

This post is a global overview of its ecosystem, why it exists and how it works. It’s not because you use something every day that you know the details, right?

Why does slf4j even exist?

Why do we need something as complicated as a logging framework to do something as simple as putting a message on stdout? Because not everybody wants to use only stdout, and because dependencies have their own logging logic too.

slf4j needs love

slf4j is an API that exposes logging methods (logger.info, logger.error and so on). It’s just a facade, an abstraction, an interface. By itself, it can’t log anything. It needs an implementation, a binding, something that truly logs the message somewhere. slf4j is just the entry point, it needs an exit.

slf4j breathes in logs

But it can also serve as an exit for other logging systems, thanks to logging adapters/bridges that redirect other logging frameworks to slf4j. Hence, you can make all your application logs go through the same pipe even if the originating logging system is different.

slf4j is magic

The magic in all that? You can do this and swap the implementation without altering the existing code.

We are going to see several logging implementations slf4j can be bound to.

I’m going to use Scala code because it’s more concise, but that’s exactly the same in Java.

Simple logging using JUL

JUL stands for java.util.logging. This package has existed since JDK 1.4 (JSR 47). It’s quite simple to use and does the job:
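As a minimal sketch (not necessarily the original snippet; the logger name and levels are chosen to match the output described below):

import java.util.logging.{ConsoleHandler, Level, Logger}

object JulApp extends App {
  val logger = Logger.getLogger("My App")
  logger.setLevel(Level.ALL)

  // our own console handler, in addition to the default one attached to the root logger
  val handler = new ConsoleHandler()
  handler.setLevel(Level.ALL)
  logger.addHandler(handler)

  logger.info("an info message")
  logger.severe("a severe message")
  logger.finer("a finer message")
}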

The default format is horrible, but we can see our logs. You’ll notice the INFO and SEVERE messages appear twice, but not the FINER one. It’s because, by default, there is already a console handler logging everything from INFO up.

It’s also configurable through a properties file, often named “logging.properties”.

For instance, on OSX, you can find the JVM’s global JUL configuration here (it contains the default console handler we just talked about):
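On a JDK 8 install, this is typically $JAVA_HOME/jre/lib/logging.properties (the exact location depends on the JDK). Its relevant default entries look roughly like this:

# default root handler: a ConsoleHandler filtering at INFO
handlers = java.util.logging.ConsoleHandler
.level = INFO

java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter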

Be careful: specifying a configuration file is not an override of the defaults! If you forget something (especially handlers=), you might not see any logging at all.

Note that we used the handler java.util.logging.ConsoleHandler, but a FileHandler is also available (if unconfigured, it logs into $HOME/java0.log).

LogManagers

All the Loggers created in the application are managed by a LogManager.

By default, there is a default instance created on startup. It’s possible to give another one, by specifying the property java.util.logging.manager.

It’s often used along with log4j that implements a custom LogManager (available in the package org.apache.logging.log4j:log4j-jul):

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

This way, any manager can have a hand on any Logger created in the application.

It can change their behavior and where they read their configuration from, for instance. This is what we call a Logging Adapter or a bridge: you can log using JUL in the code and use log4j features to manipulate and save the logs. We’ll go into more detail later in this post.
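To use JUL through slf4j, you put the slf4j-jdk14 binding next to slf4j-api on the classpath; a hedged sketch with sbt and the slf4j API:

// build.sbt -- versions are only indicative
libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-api"   % "1.7.21",
  "org.slf4j" % "slf4j-jdk14" % "1.7.21" // the binding that forwards to java.util.logging
)

And the code only talks to the slf4j API:

import org.slf4j.LoggerFactory

object Slf4jApp extends App {
  val logger = LoggerFactory.getLogger("My App")
  logger.info("message from slf4j!")
}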

The name “slf4j-jdk14” comes from the fact that the JUL package appeared in JDK 1.4, as we said. Strange name to pick, but well.

Output:

INFO: message from slf4j! [Thu Aug 18 23:45:15 CEST 2016]

The code is the same as previously; we just changed the implementation. Notice the output is different from the SimpleLogger’s.

This logger is actually an instance of JDK14LoggerAdapter. It’s using the style we defined at the beginning, in logging.properties, used by JUL, remember?

Note that you don’t have full control over the Logger via this API, as we had when using java.util.logging.Logger directly, which exposes more methods. We only have access to slf4j’s. This is why the configuration files come in handy.

Multiple implementations

If we have multiple implementations available, slf4j has to pick between them, and it will leave you a small warning about that.

As we said, org.slf4j.impl.StaticLoggerBinder is the class slf4j-api looks for in the classpath to find an implementation. This is the class that must exist in a slf4j implementation jar.

This message is just a warning; the logging will work: slf4j simply picks one of the available logging implementations and deals with it. But it’s a bad smell that should be fixed, because it may not pick the one you want.

It often happens when pom.xml or build.sbt imports dependencies that themselves depend on one of the slf4j implementations.

They have to be excluded, and your own program should import a slf4j implementation itself. If you don’t, you could run into a no-logging issue.
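With sbt, a hedged sketch of such an exclusion (“com.example” is a placeholder for the offending dependency):

// build.sbt
libraryDependencies ++= Seq(
  // a dependency that transitively drags in another slf4j binding: exclude it
  ("com.example" %% "some-library" % "1.0.0").exclude("org.slf4j", "slf4j-log4j12"),
  // and declare explicitly the binding we actually want
  "org.slf4j" % "slf4j-jdk14" % "1.7.21"
)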

A real case causing logs loss

If we restart our program, it’s getting more verbose and we’re getting a surprise:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [.../org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.5.jar!...]
SLF4J: Found binding in [.../org.slf4j/slf4j-jdk14/jars/slf4j-jdk14-1.7.21.jar!...]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (My App).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

We can see some warnings from log4j, which we never imported, and we don’t even see our own message! Where did it go?

JCL/ACL

Jakarta is an old, retired Apache project; its logging component, historically JCL (Jakarta Commons Logging), is known as ACL now: Apache Commons Logging. It’s not maintained anymore (nothing since 2014), but we can still find it in old projects.

It serves the same purpose as slf4j, meaning it’s an abstraction over different logging frameworks such as log4j or JUL.

slf4j’s getLogger() will return a JCLLoggerAdapter that looks for a specific “Log” implementation set by the system property “org.apache.commons.logging.Log”.

If not set, it will try to fall back to any implementation it can find in the classpath (log4j, JUL…).

New projects should forget about it. But if they depend on an old project that depends on JCL, then adding a bridge to redirect the JCL logs to the project’s logging implementation should be considered.

log4j

log4j is a widely-used logging framework. v1.x has been refactored and improved a lot to create the v2.x called log4j2.

Again, it can be used as an abstraction over a logging implementation, but it can be used as an implementation as well.

TLDR

Performance

Some applications can generate a tremendous amount of logs. A few precautions should be taken:

async logging should always be preferred (another thread does the logging, not the caller’s). This is often available in the logging configuration itself

you should not need to add level guards (if (logger.isDebugEnabled) …) before logging, which brings us to the next point:

do not concatenate strings yourself in the message: use the placeholder syntax, such as log.info("the values are {} and {}", item, item2). The .toString() won’t be computed if it’s not needed (it can be CPU intensive, and it’s simply useless to call it if the log level is not high enough).

In Scala, you can generally use https://github.com/Log4s/log4s to avoid this and just use classic string interpolation. It’s based on macros and automatically adds the guard conditions.
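To make the difference concrete, here is a minimal sketch using the plain slf4j API (log4s achieves the same effect with string interpolation, thanks to its macros):

import org.slf4j.LoggerFactory

object PlaceholderStyle extends App {
  private val log = LoggerFactory.getLogger("demo")

  // a hypothetical value whose toString() is expensive to compute
  val item = new Object { override def toString = { Thread.sleep(100); "expensive" } }

  log.debug("the value is " + item)   // bad: builds the string (and calls toString) even if DEBUG is off
  log.debug("the value is {}", item)  // good: the message is only formatted if DEBUG is enabled
}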

Last notes

slf4j is useful combined with a powerful implementation such as log4j2 or logback.

But be careful when the application is managed by another application, like supervisor, because it can handle part of the logging itself too, like file rolling or shipping to logstash. Often, keeping the logging configuration simple (stdout) is enough.

A lot of frameworks have traits, abstract classes, or globals to provide the logging directly:
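For instance, a hedged sketch assuming typical Scala libraries such as scala-logging or Akka (the class names are made up):

import com.typesafe.scalalogging.LazyLogging

// scala-logging: the "logger" field is provided by the trait
class OrderService extends LazyLogging {
  def process(id: Long): Unit = logger.info(s"processing order $id")
}

// Akka: an actor gets a "log" field by mixing in ActorLogging
// class OrderActor extends Actor with ActorLogging { ... log.info("...") ... }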

Most developers will never use it, because it’s rarely necessary to access system resources, windows, volumes, etc. That really depends on your business.

Sometimes, you want to use a library that’s not written in Java but in C, because it’s very performant and battle-tested; then you need to create a bridge. This is where JNI and JNA come into play.

As for resources, Java already provides some high-level APIs for a few system aspects (memory, disks), such as:

Runtime.getRuntime().maxMemory()

Runtime.getRuntime().availableProcessors()

File.listRoots()(0).getFreeSpace()

But it’s pretty limited. Behind the scenes, they are declared as native and rely on JNI.

You can use projects that offer more options, such as oshi (Operating System & Hardware Information). It makes all the available information about the OS and hardware of the machine accessible (memory and CPU metrics, network, battery, USB, sensors…).

It’s not using JNI: it’s using JNA! JNA is JNI’s cousin: created to be simpler to use, and to write only Java code (Scala in our case :). Note that there is a slight call overhead compared to JNI because of the dynamic bindings.

JNA

Basically, it dynamically links the functions of the native library to some functions declared in a Java/Scala interface/trait. Nothing more.

The difficulty comes with the signatures of the functions you want to “import”.
You can easily find their native signatures (Google is our friend), but it’s not always obvious how to translate them into Java/Scala types.

Fortunately, the JNA documentation is pretty good at explaining the subtle cases: Using the library, FAQ.

Let’s review how to use it with Scala and SBT (instead of Java).

How to use it
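The build configuration isn’t shown here; a hedged sketch of the dependencies could be:

// build.sbt -- versions are only indicative
libraryDependencies ++= Seq(
  "net.java.dev.jna" % "jna"          % "4.2.2",
  "net.java.dev.jna" % "jna-platform" % "4.2.2"
)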

“jna-platform” is optional. It contains a lot of already written interfaces to access some standard libraries on several systems (Windows (kernel32, user32, COM..), Linux (X11), Mac). If you plan to use any system library, check out this package first.
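As a hedged sketch of the principle (assuming a Unix-like system, since it binds to libc):

import com.sun.jna.{Library, Native}

// the trait compiles to a plain Java interface; JNA binds each method
// to the native function of the same name in the library we load ("c" = libc)
trait CLibrary extends Library {
  def getpid(): Int
}

object JnaDemo extends App {
  val libc = Native.loadLibrary("c", classOf[CLibrary]).asInstanceOf[CLibrary]
  println(s"current pid: ${libc.getpid()}")
}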

I generally use openssl only to create a pair of private/public keys to be used with ssh.

But when I need to use it for other reasons, I’m always wondering what to do. I google some tutorials, then copy/paste commands. But I never understood what the deal was with all those files: .key, .pem, .csr, .crt?! Then, when I succeed in doing my thing, I move on. I never really tried to understand the flow.

I’m not an expert in openssl. I just want to demystify some of its features. How is it used to generate keys and certificates? What can we do beyond that? (random number generation, manually encrypting/decrypting files)

I’ll use a classic example: generating a self-signed certificate. Several commands will be used, several files will be generated. I’ll add the certificate to nginx to make it work. Finally, I will use some other commands unrelated to certificates.

Why a certificate?

It has two purposes :

ensure that the website you are on is the one it claims to be

Browsers (or operating systems) have a set of Certificate Authority root certificates installed. They use them to verify the certificates of the websites.

encrypt the data between you and the website’s server

The certificate contains a public key that the browser will use to encrypt the data it sends. The server will be able to decrypt it using the associated private key stored on its disk.

Why a self-signed certificate?

Those kinds of certificates were more useful back when HTTPS certificates were not free. To work with HTTPS in staging or development environments, it was the way to go.

Now, we have Let’s Encrypt and https://gethttpsforfree.com/ to get free HTTPS certificates truly signed by a Certificate Authority. But they are still more complex to grab than simply generating a self-signed certificate.
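Step 1: generate a private key

The exact command isn’t shown here; a hedged reconstruction, reusing the file name and password that appear later in the post:

$ openssl genrsa -des3 -passout pass:ThEpWd -out out.key 2048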

We generate a 2048-bit key. The bigger, the safer. The bigger, the slower to generate. The process has to generate 2 prime numbers and do some security checks (represented by the “.” and “+”). It also uses some random signals to always get a different result.

We encrypt it with the Triple DES cipher using a password. Here, it is provided directly on the command line. It could also be read from a file (“file:pwd.txt”). Any program trying to use the key will first need the password. It’s a second layer of security in case someone gets access to the file.

Note that automation could get you into trouble: if a program needs you to type a password because you are using an encrypted private key (nginx or any openssl command, for instance), you might need human interaction.

The output is by default in the PEM format (it’s just the base64 of the DER format, which is binary). PEM stands for “Privacy-Enhanced Mail” (https://tools.ietf.org/html/rfc1421). It’s a very old format.

It’s a plain ASCII file with a specific header and footer, and a big base64 string in between. It’s useful for sending through email (not the private keys, of course!) along with some text, or even for sending an encrypted message (PGP).

Bonus: create a key from another key

“rsa” can also be used to convert any key to another key format. For instance, we could generate another private key based on our first one:

$ openssl rsa -passin pass:ThEpWd -in out.key -out out-next.key

I’m not sure why you would do that. Is it more secure? I have already stumbled upon this technique to create a certificate, instead of using the password-protected key generated by genrsa. Please, let me know!

Step 2: create a certificate request

Now that we have our private key, we are going to create a certificate request. Both the key and the certificate request are needed to create a certificate.

It’s necessary to request one because you are not supposed to be the one who signs the certificate. It is the role of the Certificate Authorities.

We use “openssl req” to generate a .csr (Certificate Signing Request).

$ openssl req -new -key out.key -out out.csr -sha256
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [AU]:FR
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
...

We provide the private key to use (out.key)

We fill in several metadata fields about the company (for a self-signed certificate, we don’t care much). They will be identifiable in the certificate.

We use a SHA-256 hash. If unspecified, SHA-1 will be used. SHA-1 creates a 160-bit hash and is deprecated due to some weaknesses. For instance, Chrome clearly displays a warning if a certificate still uses SHA-1. The new “standard” is SHA-256: it creates a 256-bit hash.
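To actually get the self-signed certificate, the request is then signed with our own key; a hedged sketch of the typical command:

$ openssl x509 -req -sha256 -days 365 -in out.csr -signkey out.key -passin pass:ThEpWd -out out.crt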

A better nginx configuration

nginx’s ssl configuration is a very important topic, except when dealing with self-signed certificates, because those should not be exposed to the Internet. Otherwise, you should definitely follow the advice of the following gist.
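The gist itself is not reproduced here. Just to wire in the files we generated, a minimal hedged sketch of the nginx side (paths are placeholders) looks like:

server {
    listen 443 ssl;
    server_name example.lan;

    ssl_certificate     /etc/nginx/ssl/out.crt;  # the self-signed certificate
    ssl_certificate_key /etc/nginx/ssl/out.key;  # the private key (nginx asks for the passphrase if encrypted)
}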

So, we have data coming from one of our services, which is the source of a Flume agent. Now, we want to be able to query it in a scalable fashion without using HBase or any other database, to stay lean.

One way is to use HDFS as a database (Flume has an HDFS sink that handles partitioning), create a Hive table on top to query its content, and, because we want something performant and fast, actually use Impala to query the data stored in the Apache Parquet format.

Here’s a little diagram of the stack :

Apache Oozie is used to regularly export the HDFS content to a parquet format.

We are storing our data into HDFS in an Apache Avro format (Snappy compressed) because of all its advantages and because we are already using it everywhere.

Let’s review the stack one piece at a time, starting with the Flume configuration.

Flume

Let’s say we have an avro source where our custom service is sending events:
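The agent configuration itself isn’t reproduced here; below is a hedged sketch of what it could look like, given the settings discussed in the next sections (component names, host/port, the HDFS path and the avro serializer lines are assumptions):

agent.sources = avroSource
agent.channels = memChannel
agent.sinks = hdfsSink

agent.sources.avroSource.type = avro
agent.sources.avroSource.bind = 0.0.0.0
agent.sources.avroSource.port = 4141
agent.sources.avroSource.channels = memChannel

agent.channels.memChannel.type = memory

agent.sinks.hdfsSink.channel = memChannel
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = /user/flume/events/ymd=%Y-%m-%d/h=%H
agent.sinks.hdfsSink.hdfs.filePrefix = events
agent.sinks.hdfsSink.hdfs.inUsePrefix = .
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.rollInterval = 300
agent.sinks.hdfsSink.hdfs.rollSize = 0
agent.sinks.hdfsSink.hdfs.rollCount = 0
# assumption: write avro container files (the events are expected to carry their schema)
agent.sinks.hdfsSink.serializer = org.apache.flume.sink.hdfs.AvroEventSerializer$Builder
agent.sinks.hdfsSink.serializer.compressionCodec = snappy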

Partition

Because of our volume of data, we want to partition it per year-month-day, then by hour. The “ymd=” and “h=” in the path are important: they represent the “column” names of the time dimension that will be queryable later.

Note that your Flume events must have a “timestamp” header for Flume to know what the time dimension is. If you don’t have this info, you can simply set hdfs.useLocalTimeStamp = true to use the ingestion time, but it’s discouraged, because that means you don’t have any timestamp column in your data, and you’re going to get stuck later when doing the Impala partitioning.

Roll interval

We decide to roll a new file every 5 minutes, and not based on size or count (they have to be explicitly set to 0 because they have another default value).

By default, Flume buffers into a .tmp file we can’t rely on, and because we want to access fresh data quickly, 5 minutes is a good start. This is going to generate a bunch of files (288 per day), but we don’t care, because we are going to export them later into an hourly Parquet format and clean up the old HDFS content.

Moreover, if Flume suddenly dies, you are going to lose at most 5 minutes of data, instead of the whole buffer. Fortunately, stopping Flume properly flushes the buffer.

File name

Setting inUsePrefix to “.” hides the in-progress files from Hive during a query (it ignores hidden files, i.e. those starting with a dot). If you don’t, some MapReduce jobs can fail: at first, Hive saw a file Flume was buffering into (a .tmp), then by the time the MR executed, the file was not there anymore (because of a flush), and kaboom, the MR fails.

File type

By default, the file type is SequenceFile. We don’t want that, because it makes Flume convert the output stream to a SequenceFile that Hive won’t be able to read, since the avro schema won’t be inside. Setting it to DataStream lets the data be written unaltered.

Multiple Flumes?

Be careful if multiple Flume agents are writing to the same location: the buffer is not shareable and you could have name conflicts. By default, Flume names the files with a timestamp (in milliseconds). That’s fine most of the time, but you never know if they are going to collide one day. Consider having two different configurations with a different prefix or suffix for the filename.

Performance consideration

Don’t forget to monitor your Flume agent when adding an HDFS sink. The overhead is noticeable: there are a lot more I/O threads (by default it’s 10, but I noticed way more threads with VisualVM), and CPU usage slightly increases.

Our HDFS is in place and new data is being streamed into it. Let’s now configure Hive to create a table on top of the files.

Hive

Fortunately, Avro is standard in Hadoop, and Hive has everything needed to read avro files and create a table from them.

AvroSerDe

The magic happens when using the AvroSerDe (Avro Serializer Deserializer). It is used to read avro files to create a table, and vice-versa, to create avro files from a table (with some INSERT). It also detects and uncompresses the files when compressed with Snappy.

Under the hood, it’s simply using DataFile[Writer|Reader]<GenericRecord> code to read/write avro content.

Create the table

We create a Hive external table mapped onto our .avro files using the AvroSerDe, specifying an avro schema to read the data (it will probably be the same as the schema used to write the data, BUT it could be different, such as only a slice of it):
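The statement isn’t reproduced here; a hedged sketch (the location and the schema URL are placeholders, and the columns come from the avro schema):

CREATE EXTERNAL TABLE avro_events
PARTITIONED BY (ymd STRING, h INT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/flume/events'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/events.avsc');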

Notify Hive of the new partitions

Our data is in HDFS, our Hive table is mapped onto it, good. But Hive needs to discover the data now; it’s not automatic, because we are short-circuiting it (we are inserting data without any Hive INSERT).

msck repair table to the rescue. It simply looks into the folder to discover new directories and adds them to the metadata.
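For our table, that is simply:

hive> MSCK REPAIR TABLE avro_events;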

Because we are not crazy enough to query through Hive, let’s focus on querying the data through Impala to get blazing fast responses.

Impala

We can use Impala to query the avro table, but for performance reasons, we are going to export it into a Parquet table afterwards. This step is mostly here to show that it’s possible.

Query the Hive avro table

If right now, in Impala, you do:

> show tables;

You won’t see the Hive table yet. You need to make Impala aware it’s there:

> INVALIDATE METADATA avro_events;

When it’s done, you’ll be able to query it, but with still-frozen data. It’s the same problem as with Hive.

For Impala to see the latest data of the existing partitions, we can use REFRESH :

> REFRESH avro_events;

But that still won’t discover the new partitions (e.g. if Flume just created a new hour partition).

We have to use the equivalent of Hive’s msck repair table, but for Impala:

> ALTER TABLE avro_events RECOVER PARTITIONS;

This RECOVER PARTITIONS will do what REFRESH does (see the latest data of the existing partitions), but will discover the new ones too. I don’t know the impact and process time on big tables with tons of partitions.

Query a Parquet table

Because we want to be taken seriously, we want to store our data in Parquet files to get fast queries. Parquet stores the data in columns rather than in rows, supporting über-fast filtering because it doesn’t need to parse every row.

First, we need to create a partitioned Impala table stored as Parquet:
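A hedged sketch of that statement (only the two columns used below are shown, the rest of the event columns are elided):

CREATE TABLE avro_events_as_parquet (
  type STRING,
  `timestamp` BIGINT
  -- plus the other columns of your events
)
PARTITIONED BY (ymd STRING, h INT)
STORED AS PARQUET;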

It doesn’t have to follow the same partitioning as the Hive table, but for the sake of clarity, it does.

It’s empty; we are going to fill it from the avro table. To do that, we are going to base our partition logic on a “timestamp” column you should have in your data. We can’t retrieve the partition values from the Hive table’s avro folder names, because they are not queryable.

-- We ensure we're viewing the latest partitions of the Hive table
-- where Flume is sinking its content
ALTER TABLE avro_events RECOVER PARTITIONS;
REFRESH avro_events;
-- We insert the data overwriting the partitions we are going to
-- write into, to be sure we don't append stuff to existing (that
-- would do duplicates and wrong stats!)
INSERT OVERWRITE avro_events_as_parquet
PARTITION(ymd, h)
SELECT type, ...,
FROM_UNIXTIME(FLOOR(`timestamp`/1000), 'yyyy-MM-dd') AS ymd,
CAST(FROM_UNIXTIME(FLOOR(`timestamp`/1000), 'HH') AS INT) AS h
FROM avro_events
[ WHERE `timestamp` >= $min AND `timestamp` < $max ];
-- We compute the stats of the new partitions
COMPUTE INCREMENTAL STATS avro_events_as_parquet;

We specify the partition values doing some transformation on our “timestamp” column (note: FROM_UNIXTIME uses the current locale).

The WHERE is not necessary the first time, when you just want to load everything. Later, when you have a scheduled job running every hour for instance, you want to filter which partition you write, hence the WHERE.

Now, because Flume is streaming, and because you want to query Impala without doing all the manual updates yourself, you plan a recurring Oozie job to take care of them.

Oozie

It’s quite straight-forward, it’s just an encapsulation of the scripts we just used.

First, we define a coordinator running every hour that will write the previous hour partition. (the coordinator could trigger the workflow every 5mins to have less lag in Impala if that’s necessary; the same Parquet partition would just be overwritten 12 times per hour with more and more data each time)

We take care of adding a small lag after the exact hour: instead of running the workflow at 12:00, we run it at 12:05, to be sure Flume has flushed its data (the 5 minutes are not random: it’s the same value as Flume’s rollInterval).

That means we can define a property in the coordinator.xml like this:
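A hedged sketch of such a property, using Oozie’s coordinator EL functions (the property name is arbitrary):

<property>
    <name>previousHour</name>
    <!-- the nominal time minus one hour, e.g. 2016-08-18T11:00Z -->
    <value>${coord:formatTime(coord:dateOffset(coord:nominalTime(), -1, 'HOUR'), "yyyy-MM-dd'T'HH:mm'Z'")}</value>
</property>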

And the script uses it to build the WHERE condition we just talked about in the Impala part, with some shell transformations (with what we did, we just have a plain TZ date, but we need at least 2 timestamps (min and max), or any other time value we could use from our data if we have it, such as “day” or “hour”).

I assume you know Oozie and what you are doing, so I won’t provide the full scripts here.

Improvements

I think the stack could be simplified by using a serializer to export into Parquet directly to HDFS, but that would still create a bunch of tiny Parquet files (because we want to flush often), so we would still need to merge them automatically at the end. What do you think?

Conclusion

I hope I made myself clear and that the process makes sense.

Don’t forget we did all that just because we didn’t want to use any database, only HDFS, to store our data, for various reasons. We wanted to be able to query it quickly through Impala to have “almost-realtime” data (a coordinator running every 5 minutes, for instance, would do it).

Another solution would be to sink Flume into HBase then query over it, or create a Hive/Impala table on top.

On your company network, or your home network, you have access to several computers, and you probably want to interact with some of them: start some tasks, check some logs, reboot them, and so on.

Of course, you are already using ssh to do so.

If you are still typing a username and password each time you log in somewhere, stop it.

Passwords are obsolete

Stop wasting your time looking up the password in some Excel file or wherever, and use the power of public-key cryptography. Smart people created it for a reason.

The principle is simple :

you have a (private) key in a file on your computer

the other computer has your (public) key in a file

those keys are strongly related

When you connect to the other computer, ssh and sshd (the daemon waiting for connections) talk to each other and check whether they can make some kind of association using the keys each of them has. If they succeed, it means you are the one with the right private key (only your computer stores it, nothing else), it knows you, you are who you claim to be, and therefore you have access.

Can be used with GitHub, BitBucket, anything

Note that this approach does not only apply to the networks where you ssh, but to the whole Internet such as when you are using GitHub, BitBucket, or anything that has a login and password.

You are not doing pure ssh command-line into them, but you are pushing data into them. Data that needs to be authenticated. That’s why they offer the possibility to add public keys to your account (through their UI) that will be paired with the private key you have stored on your local computer.

Time to do some hacking

Here is my key

First, we need to generate those keys on our local computer :

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

By default, we won’t put any passphrase, even if that can be considered a bad practice.
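The step in between is to install the public key on the server, in the target user’s ~/.ssh/authorized_keys file; for instance:

# the easy way
root@local:~# ssh-copy-id john@server

# or manually
root@local:~# cat ~/.ssh/id_rsa.pub | ssh john@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'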

As soon as you save that file, you’ll be able to connect from your local computer to the server, no questions asked.

root@local:~# ssh john@server
john@server:~$

One-way only

This config is one-way only. You can’t connect from the server back to your local machine. If you’d like to do so, you must perform the same actions from the other side (generate keys on the server and add its public key to the local computer).

But do not consider doing this. A user on a server should never be able to connect to another computer. You should always exit back to your local computer, then connect to the other server. Do not connect nodes to each other; it can introduce security issues.

One computer to rule them all

Once you have put your public key somewhere, never regenerate your key set on your computer. Otherwise you’ll lose the ability to connect to the servers you set up, because the public key won’t “match” the private key anymore. ssh-keygen will warn you if that’s the case, but better not to try. ;-)

.ssh/config

Even the username can be optional

Another great feature is avoiding having to type the username you want to log in with on the server.

Instead of :

root@local:~# ssh john@staging

It would be nice to do :

root@local:~# ssh staging

And automatically have “john” as the user (because this is the one set up with the keys).

To do so, edit the file .ssh/config and add something like :

Host staging
HostName staging.host.lan
User john

You can add as many hosts as you want in this file. It’s simply some kind of mapping between a name you give “Host” and User@HostName.

Beyond hosts

This file is not only useful to declare Host mappings. It can contain way more configuration bits, such as :

Host *
ServerAliveInterval 60

This avoids the famous “broken pipe” you’ll get if you are inactive in an ssh session: it sends keep-alive packets to make sure the connection stays up.

Beyond ssh

The keys are not only used by ssh.

scp is also compatible (to copy files from/to another host):

$ scp -r staging:/tmp/logs .

It follows the same rule as ssh : using .ssh/config and the set of keys on both sides.

But it’s not enough

Note that for further security, your keys should have a passphrase (we left it blank in our example), and you should use ssh-agent to, again, avoid typing it every time it’s needed.
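A minimal sketch of that workflow:

# start the agent for this shell session and add the key once;
# the passphrase won't be asked again for subsequent connections
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa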

Back in the Java world, I had made up my mind and knew that I didn’t know enough, that I wasn’t confident enough.

Therefore, I looked at some “simple” aspects of Java (CLI, GC, tools) to consolidate my knowledge, and wrote this post to give a global overview of what the Java CLI has to offer, how to configure the heap memory, what the GC principles and most useful options are, and to introduce some tools to debug and profile the JVM.

I assume you already know Java and know what the Garbage Collector does with the Young Generation and the Old Generation. Hopefully, this post will teach you some new tricks.

-XX:+PrintGCDetails

This is more interesting: you still see the heap size changes, but you also see the young generation (PSYoungGen), the old generation (ParOldGen), and the Metaspace changes (I was running with the Parallel GC; it looks different depending on which GC is used).

-XX:+PrintGCApplicationStoppedTime

It’s useful to know how much time your application didn’t do anything, because the World was Stopped.
You really want to minimize those times.

Total time for which application threads were stopped: 0.0000492 seconds,
Stopping threads took: 0.0000179 seconds
Total time for which application threads were stopped: 0.0033140 seconds,
Stopping threads took: 0.0000130 seconds
Total time for which application threads were stopped: 0.0004002 seconds,
Stopping threads took: 0.0000161 seconds

-XX:+PrintAdaptiveSizePolicy

This displays some metrics about survivals and promotions that the JVM Ergonomics is using to tune and optimize the GC behavior (by modifying space sizes).

-XX:-UseAdaptiveSizePolicy

The JVM Ergonomics tries to enhance the latency and the throughput of your application by tuning the GC behavior such as modifying the space sizes.
You can disable this behavior if you know you don’t need it.

And you can still have the details about the survivors and promotions if combined with the previous flag.

Memory tuning

Heap size

Heap = Young Generation (Eden + Survivors) + Old Generation (Tenured)

This is the big part that you can impact, for better or worse.
If you think you need to change it, be sure it’s necessary: know the existing GC cycles, and know whether you have reached the limit (or not).
Or you can just give it a try and check the behavior, latency, and throughput of your application. ;-)

-Xms / -XX:InitialHeapSize : initial heap size

-Xmx / -XX:MaxHeapSize : maximum heap size

The MaxHeapSize influences the InitialHeapSize up until 256m.

if MaxHeapSize=2m then InitialHeapSize=2m (max)
if MaxHeapSize=256m then InitialHeapSize=256m (max)
if MaxHeapSize=512m then InitialHeapSize=256m (half)

Default size

As we already said, the default MaxHeapSize is 1/4 of the machine RAM, and the InitialHeapSize is 1/64.

For instance, on my machine, I have 16GB of RAM, that gives :

InitialHeapSize = 268435456 = 256m
MaxHeapSize = 4290772992 = 4092m
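You can check these values with -XX:+PrintFlagsFinal, for instance:

$ java -XX:+PrintFlagsFinal -version | grep -E "InitialHeapSize|MaxHeapSize"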

Be careful with big numbers and PrintFlagsFinal: it won’t display them properly if they are greater than 4094m, because it prints them as uint, whose limit is 4,294,967,295.

Young generation (in heap)

This part of the heap is where all objects start their lifecycle. They are born here, will likely evolve into survivors, then end up in the old generation if they stay alive long enough.

-XX:NewSize : young generation initial size

-XX:MaxNewSize : young generation maximum size

-Xmn : shortcut for both

The MaxHeapSize and InitialHeapSize influence the MaxNewSize and NewSize.

if MaxHeapSize=1g (InitialHeapSize=256m) then MaxNewSize=341m and NewSize=85m
if MaxHeapSize=4g (InitialHeapSize=256m) then MaxNewSize=1365m (x4) and NewSize=85m
if MaxHeapSize=4g and InitialHeapSize=1g then MaxNewSize=1365m and NewSize=341m (x4)

By default, the ratio is 3:1 between MaxHeapSize and MaxNewSize, and between InitialHeapSize and NewSize.

Default size

We just saw that NewSize/MaxNewSize are linked to InitialHeapSize/MaxHeapSize.

The default MaxHeapSize is 1/4 of the machine’s RAM, and the InitialHeapSize is 1/64.
Therefore, the default MaxNewSize is (1/4)/3 of the RAM, and the default NewSize is 1/3 of the InitialHeapSize (i.e. (1/64)/3 of the RAM).

Minimum size

You can’t have MaxNewSize < NewSize:

Java HotSpot(TM) 64-Bit Server VM warning:
NewSize (1536k) is greater than the MaxNewSize (1024k).
A new max generation size of 1536k will be used.

The 1536k will be equally split between the Eden space, the “from” survivor space, and the “to” survivor space (512k each).

Nor can you have MaxNewSize >= HeapSize (the young generation size can’t be greater than the total heap size).

Not enough space ?

Even if you have a MaxNewSize of 1m and your program tries to allocate 1GB, it will work if you have a big enough heap size. The allocation will just go directly into the old generation space.

Thread Stack (off heap)

Each and every thread in the program will allocate this size for its stack.

This is where they store the parameter values of the functions they are currently executing (removed when the function exits). The deeper your calls go, the deeper you go into the stack (FILO).

Recursive calls can go very deep because of their intrinsic nature. This is where you have to be careful in your logic, and maybe increase the default ThreadStackSize.
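For instance (the values are arbitrary, just to show the two equivalent flags):

$ java -Xss2m com.company.MyApp
$ java -XX:ThreadStackSize=2048 com.company.MyApp   # same thing, expressed in KB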

Metaspace (off heap)

Minimum size

You can't set the Metaspace size too small (< 5m), otherwise you'll end up with these errors:
Error occurred during initialization of VM
OutOfMemoryError: Metaspace
-
java.lang.OutOfMemoryError: Metaspace
<<no stack trace available>>
-
Exception in thread "main" java.lang.OutOfMemoryError: Metaspace
But it's a good idea to set a maximum size, to be sure the JVM will never take an “unlimited” amount of memory (and break the other apps on the server) in case of a bug.

Garbage Collectors

Each GC deals differently with the Young Generation space (new objects) and the Old Generation space (objects referenced for a while), because the Young generation is a very fast-paced space, unlike the Old one.

The Young Generation space should never be too big; 2GB seems like a good limit. Otherwise, the algorithms may not be as performant when processing it.

GC (Allocation Failure) : a minor GC (Young generation) was done because space was not available

Full GC (Ergonomics) : the JVM decided to do a Full GC (Young + Old generations) because of some thresholds

But you can force-disable it with -XX:-UseOldParallelGC: you’ll end up using the PSOldGen old generation collector. It’s not parallel anymore but serial (like the SerialGC). You should probably not use it.

You can control how many threads the parallel phases use with -XX:ParallelGCThreads=N.
By default, it’s the number of cores the computer has (it must be at least 1).

-XX:+UseParNewGC -XX:+UseConcMarkSweepGC

The Concurrent Mark and Sweep GC.

It’s an evolution of the Parallel GC. This time, it’s not a Stop The World algo everywhere.
It can collect the old generation concurrently while the application is still running, meaning you should have a better latency.

ParNewGC, while collecting the young generation, will send some stats to the ConcMarkSweepGC, that will estimate if it should run a GC (according to the trend of the promotion rates in the young generation). This is why the CMS works with this one and not the classic parallel UseParallelGC.

Moreover, while being mostly concurrent, it has just a few phases where it still must Stop The World, but they are very short periods, contrary to the previous algorithms.

The Stop The World events happen during the CMS Initial Mark and CMS Final Remark.

You may have noticed that the Old Generation is not compacted at the end, meaning there can still be holes in the memory: chunks that are not reused because they are too small.

If Java can’t find any more memory because of that, it will trigger a Full GC by calling the Parallel GC (well, a GC that does the compaction, but Stops The World). Moreover, this can also happen when a CMS collection is in progress (concurrently) and suddenly a lot of survivors are promoted to the old generation and boom, no more space.
This is why the CMS must be triggered way before the space is filled.

That is the role of the flag -XX:CMSInitiatingOccupancyFraction; by default, it’s around 92% according to Oracle.

Moreover, you can control how many threads to use for the concurrent part using -XX:ConcGCThreads=N (measure before changing it).

-XX:+UseG1GC

The latest Java HotSpot VM GC.

It handles the space differently compared to its predecessors (being closer to the ConcMarkSweepGC).

There are no longer only Young and Old regions. There are a bunch of regions of different sizes (some will be automatically resized on the fly by the GC to improve performance), and each of them deals with only one type of generation: Eden, Survivor, or Old (and some with Humongous objects: objects so big they span several regions). It targets around 2000 regions, each of them between 1MB and 32MB.

It is oriented towards quite big heaps (> 4GB) and low-latency environments: you specify the maximum pause time you desire for GCs (default: 200ms, with -XX:MaxGCPauseMillis).

It is mostly concurrent (it does not affect the latency of the application too much) and parallel (for the Stop-The-World phases), but it is a bit more compute-intensive (it computes stats to enhance its behavior and predict what to clean, in order to reach the desired pause time).

G1 Evacuation Pause : copy alive objects (Eden or Survivors) to another region(s) compacting them and promoting them if old enough (to an Old Generation region). It’s a Stop The World process

concurrent-* : marks and scan alive objects and do some cleaning while the application is still running

(mixed) : both young and old generations copied (“evacuated”) elsewhere at the same time

Profiling

ASCII profiling

If you’re a hardcore player, you can use the Java agent hprof to retrieve a human-readable heap dump with the Java profile of your application (when it ends).
It’s bundled by default in the HotSpot JVM.

$ java -agentlib:hprof=heap=sites com.company.MyApp

That will generate a file java.hprof.txt where you can easily find out what the most expensive function calls are.

JMX

A nice and easy way to look into the internals of your live program (local or remote) is to enable JMX when starting the application. JMX can be secured, but if you don’t want to be bothered with that, start the JVM with these settings:

-Dcom.sun.management.jmxremote.port=5432

-Dcom.sun.management.jmxremote.authenticate=false

-Dcom.sun.management.jmxremote.ssl=false

It will expose its internals through the JMX protocol on port 5432.
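Putting it all together, the launch command could look like this:

$ java -Dcom.sun.management.jmxremote.port=5432 \
       -Dcom.sun.management.jmxremote.authenticate=false \
       -Dcom.sun.management.jmxremote.ssl=false \
       com.company.MyApp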

You need a program to read from it. Fortunately, there is one installed by default : jvisualvm.
Just start your Java program somewhere, then start jvisualvm.

If on the same computer, it will automatically find it.
Install the VisualGC plugin if you don’t have it; monitoring the GC in detail is a win.
You can even do the CPU and Memory profiling live.