Our applications often interact with external systems. In many cases, we need a persistent connection to one or more of these external services. For example, if your application makes continuous use of a database, you’ll likely want to stay connected to that database so you can avoid spending time and resources connecting and disconnecting for every request. With Erlang and Elixir, the natural abstraction for maintaining a persistent connection is a process. In this post, we’ll look at how we can take advantage of the gen_statem behaviour to write state machine processes that act as persistent connections to external systems.
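To make the idea concrete, here is a minimal sketch of such a process, assuming nothing from the post itself: the module name, states, and `:ping` call are made up, and the connection step is simulated where a real implementation would open a socket or database connection.

```elixir
defmodule Connection do
  @behaviour :gen_statem

  # A connection process with two states, :disconnected and
  # :connected. The connection itself is simulated here.

  def start_link(opts) do
    :gen_statem.start_link(__MODULE__, opts, [])
  end

  @impl true
  def callback_mode, do: :state_functions

  @impl true
  def init(opts) do
    # Start disconnected and immediately try to connect via an
    # internal event, so callers never block on connection setup.
    {:ok, :disconnected, opts, [{:next_event, :internal, :connect}]}
  end

  def disconnected(:internal, :connect, data) do
    # A real connection attempt would go here; on failure we could
    # stay :disconnected and schedule a retry with a timeout action.
    {:next_state, :connected, data}
  end

  def connected({:call, from}, :ping, data) do
    {:keep_state, data, [{:reply, from, :pong}]}
  end
end
```

Starting the process and calling `:gen_statem.call(pid, :ping)` replies `:pong` once the process has reached the `:connected` state.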

Property-based testing is a common technique to improve testing by checking properties of a piece of software against many values drawn at random from a large space of valid inputs. This methodology was first introduced in the paper QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs, which describes the basic idea and shows a possible implementation in Haskell. Since then, many tools to aid in property-based testing have appeared for many programming languages: as of the time of writing, there are libraries for Haskell, Erlang, Clojure, Python, Scala, and many others. A few days ago I released the first version of StreamData, a property-based testing (and data generation) library for Elixir (that is a candidate for inclusion in Elixir itself in the future). This post is not an introduction to property-based testing nor a tutorial on how to use StreamData: what I want to do is dig into the mechanics of how StreamData works, its design, and how it compares to some of the other property-based testing libraries mentioned above.
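For readers who have never seen the technique, a stripped-down sketch of the core idea, assuming the stream_data package is available: generate many random inputs and check that a property holds for every one of them. StreamData generators implement `Enumerable`, so `Enum.take/2` pulls random values out of them directly.

```elixir
# Generate 100 random lists of integers.
lists = Enum.take(StreamData.list_of(StreamData.integer()), 100)

# Property: sorting is idempotent — sorting twice gives the same
# result as sorting once. Crash if any generated input violates it.
Enum.each(lists, fn list ->
  true = Enum.sort(Enum.sort(list)) == Enum.sort(list)
end)
```

In real test suites you wouldn't hand-roll this loop: the library integrates with ExUnit so failing inputs are shrunk to minimal counterexamples, which is where most of the interesting mechanics live.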

Erlang provides a way to implement functions in C and call them transparently from Erlang code. These functions are called NIFs (native implemented functions). There are two scenarios where NIFs can turn out to be the perfect solution: when you need raw computing speed, and when you need to interface with existing C libraries from Erlang. In this article, we’re going to take a look at both use cases.
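The Erlang/Elixir side of a NIF module follows a standard stub pattern, sketched below. The module name, library path, and `add/2` function are all hypothetical; in a real project you would use the `@on_load` attribute so the NIF is loaded automatically when the module is.

```elixir
defmodule FastMath do
  def load do
    # :erlang.load_nif/2 replaces the stubs below with the C
    # implementations from the compiled shared library.
    case :erlang.load_nif(~c"./fast_math", 0) do
      :ok -> :ok
      # No compiled C library is present here, so loading fails
      # and callers keep hitting the stub below.
      {:error, _reason} -> :fallback
    end
  end

  # Stub that the C implementation overrides once the NIF is
  # loaded; calling it without the NIF raises an ErlangError.
  def add(_a, _b), do: :erlang.nif_error(:nif_not_loaded)
end
```

The stub ensures the module still compiles and gives a clear error instead of an undefined-function crash when the native code is missing.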

Macros are a very common way to do metaprogramming in Elixir. There are many resources that explain what macros are and how to use them (much better than I could): there’s the Macro chapter from the “Getting Started” guide on Elixir’s website, an awesome series of articles by Saša Jurić, and even a book (Metaprogramming Elixir) by Chris McCord. In this article, I’ll assume you’re familiar with macros and how they work, and I’ll talk about a use case of macros that is rarely examined: performing compile-time work inside macros.
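As a taste of what “compile-time work” means, here is a small sketch (the module and function names are made up): the computation runs while the caller is being compiled, and only the result is injected into the caller’s code.

```elixir
defmodule Precompute do
  # This macro runs the comprehension at the caller's compile time
  # and injects only the resulting list into the caller's code.
  defmacro squares_up_to(n) when is_integer(n) do
    squares = for i <- 1..n, do: i * i
    # Macro.escape/1 turns the computed value into AST to inject.
    Macro.escape(squares)
  end
end

defmodule Example do
  require Precompute

  # The list below is computed at compile time, not at runtime.
  def squares, do: Precompute.squares_up_to(5)
end
```

Calling `Example.squares/0` simply returns the precomputed `[1, 4, 9, 16, 25]`; no squaring happens at runtime.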

Elixir is frequently used in network-aware applications thanks to the core design of Erlang and the Erlang VM. In this context, there’s often a need to connect to external services over the network: for example, a classic web application might connect to a relational database and a key-value store, while an application running on an embedded system might connect to other nodes on the network.

Lexical analysis (tokenizing) and parsing are fundamental concepts in computer science and programming. There’s a lot of theory behind them, but I won’t be talking about any of that here because, well, it’s a lot. Also, I feel like approaching these topics in a “scientific” way makes them look a bit scary; using them in practice, however, turns out to be pretty straightforward. If you want to know more about the theory, head over to Wikipedia (lexical analysis and parsing) or read the amazing dragon book (which I recommend to all programmers: it’s fantastic).
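To show how straightforward the practice can be, here is a toy lexer for arithmetic expressions like `"12 + 34"`, written from scratch with binary pattern matching. It's a sketch with made-up names: a real lexer (or a tool like leex) would handle more operators, errors, and positions.

```elixir
defmodule Tokenizer do
  # Skip whitespace between tokens.
  def tokenize(<<?\s, rest::binary>>), do: tokenize(rest)

  # A "+" becomes a :plus token.
  def tokenize(<<?+, rest::binary>>), do: [:plus | tokenize(rest)]

  # A digit starts a number: let Integer.parse/1 consume it all.
  def tokenize(<<digit, _::binary>> = input) when digit in ?0..?9 do
    {number, rest} = Integer.parse(input)
    [{:number, number} | tokenize(rest)]
  end

  # End of input: no more tokens.
  def tokenize(<<>>), do: []
end
```

`Tokenizer.tokenize("12 + 34")` returns `[{:number, 12}, :plus, {:number, 34}]`, a flat list of tokens ready to be handed to a parser.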