Yesterday at the riak_core tutorial at CodeBEAM SF I was trying to implement a
leveled based backend for the key value store we were building. I was having
trouble with leveled crashing when trying to destroy it (stop it and remove its
files, in leveled parlance). After fighting with it for a while I needed a
smaller example to see whether the mistake was mine or not.

I decided to do the smaller example and to share the process here.

First we need an Erlang application to hold our leveled dependency and
configuration; let's create it as an Erlang release with rebar3:

rebar3 new release name=lvld
cd lvld

Now that the skeleton is ready, we need to change rebar.config to add the
information needed to use leveled. The resulting rebar.config is below; see the
comments:

{erl_opts, [debug_info]}.

{deps, [
    % add leveled dependency
    {leveled,
     {git, "https://github.com/martinsumner/leveled.git", {branch, "master"}}}
]}.

{relx, [{release, {lvld, "0.1.0"},
         [lvld,
          % leveled needs crypto
          crypto,
          % make sure to load leveled, don't start it, it's not an app
          {leveled, load},
          % required by leveled
          {lz4, load},
          sasl]},

        {sys_config, "./config/sys.config"},
        {vm_args, "./config/vm.args"},

        {dev_mode, true},
        {include_erts, false},

        {extended_start_script, true}]}.

{profiles, [{prod, [{relx, [{dev_mode, false},
                            {include_erts, true}]}]}]}.

% leveled generates lots of warnings and has warnings_as_errors set, we need
% to override that by copying the erl_opts field without warnings_as_errors
{overrides,
 [{override, leveled,
   [{erl_opts, [{platform_define, "^1[7-8]{1}", old_rand},
                {platform_define, "^R", old_rand},
                {platform_define, "^R", no_sync}]}]}]}.

We will build a wrapper for leveled that exposes a simple kv store in apps/lvld/src/lvld_kv.erl.
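The actual module isn't reproduced here, but a minimal sketch of what lvld_kv
could look like follows. The leveled_bookie function names and arities used
below (book_start/4, book_put/5, book_get/3, book_delete/4) are my assumptions,
check the leveled documentation for the exact API; dispose is discussed right
after this.

-module(lvld_kv).

-export([new/1, set/4, get/3, del/3]).

new(#{path := Path}) ->
    % start a leveled instance (a "bookie") with its files under Path
    % (cache and journal sizes below are arbitrary)
    {ok, Bookie} = leveled_bookie:book_start(Path, 2000, 500000000, none),
    {ok, #{bookie => Bookie, path => Path}}.

set(State = #{bookie := Bookie}, Bucket, Key, Value) ->
    ok = leveled_bookie:book_put(Bookie, Bucket, Key, Value, []),
    {ok, State}.

get(#{bookie := Bookie}, Bucket, Key) ->
    case leveled_bookie:book_get(Bookie, Bucket, Key) of
        {ok, Value} -> {ok, Value};
        not_found -> {error, not_found}
    end.

del(State = #{bookie := Bookie}, Bucket, Key) ->
    ok = leveled_bookie:book_delete(Bookie, Bucket, Key, []),
    {ok, State}.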

In case you are curious about the cause of the crash: when calling destroy on
leveled, it returns destroy as the reason for the gen_server stop, which
doesn't seem to make Erlang happy, so it crashes the process and propagates the
error.

The solution here is to just close it and remove the files myself (the
difference between close and destroy is file removal).
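A sketch of that workaround, assuming the state map from the sketch above
(bookie pid plus the root path where leveled keeps its files) and assuming
leveled_bookie:book_close/1 is the close call:

dispose(#{bookie := Bookie, path := Path}) ->
    % close instead of destroy to avoid the gen_server stop reason problem
    ok = leveled_bookie:book_close(Bookie),
    % then remove the files ourselves, which is what destroy would have done
    remove_recursive(Path).

remove_recursive(Path) ->
    case filelib:is_dir(Path) of
        true ->
            {ok, Names} = file:list_dir(Path),
            lists:foreach(fun (Name) ->
                              remove_recursive(filename:join(Path, Name))
                          end,
                          Names),
            file:del_dir(Path);
        false ->
            file:delete(Path)
    end.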

After giving the introduction to riak_core, the teams started working on their
projects, and since they were pretty busy and didn't need much help, I decided
to "participate" too by implementing an idea I had had in mind for a while.

Many languages make it hard to provide websocket and other live connections to
clients; they work on a request/response basis and/or make it really hard or
expensive to handle many persistent connections.

I've seen solutions that involve starting a Redis server and putting something,
usually nodejs, in front of it to expose Redis topics via websockets. This
involves two moving parts and, for many projects, managing nodejs, which the
team may not have experience with.

The solution is to just start one instance of ameo or a cluster of ameos and
expose the WebSocket API to clients and the Redis API to the servers.

Servers can use their preferred Redis client library and as long as they only
use GET, PUT, DEL, PUBLISH, SUBSCRIBE and UNSUBSCRIBE it will look like they
are talking to a Redis server.
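For example, from Erlang something like the following should work with the
eredis client; the host, port and bindings below are made up for illustration,
and eredis just sends whatever command you give it, so it can talk to ameo as
long as ameo answers with the Redis protocol:

{ok, C} = eredis:start_link("127.0.0.1", 6379).

% set, get and delete a key using the commands ameo understands
{ok, _} = eredis:q(C, ["PUT", "some-key", "some-value"]).
{ok, Value} = eredis:q(C, ["GET", "some-key"]).
{ok, _} = eredis:q(C, ["DEL", "some-key"]).

% notify the websocket subscribers of a topic
{ok, _} = eredis:q(C, ["PUBLISH", "some-topic", "hello subscribers"]).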

On my way back I had some extra time at the airport so I implemented a basic
web UI to play with the websocket client and to provide a reference
implementation others can use.

You can see the result in this screencast:

Implementation details:

As said earlier, I use riak_core for clustering. For the Redis part I took some
modules from edis, an Erlang implementation of Redis, and created a library
called edis_proto that allows any project to expose a Redis compatible API with
a couple of lines of code.

We can use a gmap with an ivar as the value type; an ivar is a register that
can only be set once.

GMapIVarType = {state_gmap, [state_ivar]}.
GMapIVarVarName = <<"gmapivar">>.
GMapIVarVal1 = #{what => i_am_a_gmap_ivar_value}.
GMapIVarVal2 = #{what => i_am_a_gmap_ivar_update}.

{ok, {GMapIVar, _, _, _}} = lasp:declare({GMapIVarVarName, GMapIVarType}, GMapIVarType).

{ok, {GMapIVar1, _, _, _}} = lasp:update(GMapIVar, {apply, Key1, {set, GMapIVarVal1}}, self()).

% try updating it, will throw an error (the value of GMapIVar1 will be lost)
{ok, {GMapIVar2, _, _, _}} = lasp:update(GMapIVar1, {apply, Key1, {set, GMapIVarVal2}}, self()).

{ok, GMapIVarRes} = lasp:query(GMapIVar1).

GMapIVarRes.
[{_, GMapIVarResVal}] = GMapIVarRes.

GMapIVarResVal.
% #{what => i_am_a_gmap_ivar_value}

Types

My examples are all maps with some value type because that's the use case I'm
most interested in. Here's a list of types and their operations:

Modeled as a pair where the first component is a PNCounter and the second
component is a GMap.

This counter has a sub counter for each id (actor), and each of them can't go
below 0. With this you can model things like seats or some other resource where
you allocate counts to different parties (actors); each can decrement its own
count but not others', and since no counter can go below 0, if a given actor
needs to decrement further it has to move counts from another actor.

Operations:

{move, term()}

Moves permissions to decrement to another replica (if it has enough permissions)

increment

Increment counter, can always happen

decrement

Decrement counter, can happen when the replica has enough local increments,
or has permissions received from other replicas

Example:

BCountType = state_bcounter.
BCountVarName = <<"bcountvar">>.
Actor1 = self().
Actor2 = <<"actor2-id">>.

{ok, {BCount, _, _, _}} = lasp:declare({BCountVarName, BCountType}, BCountType).

{ok, {BCount1, _, _, _}} = lasp:update(BCount, increment, Actor1).
{ok, BCountRes1} = lasp:query(BCount1).

{ok, {BCount2, _, _, _}} = lasp:update(BCount1, increment, Actor2).
{ok, BCountRes2} = lasp:query(BCount2).

{ok, {BCount3, _, _, _}} = lasp:update(BCount2, decrement, Actor1).
{ok, BCountRes3} = lasp:query(BCount3).

% here Actor1 has its counter set to 0 and can't go below 0; if it wants to
% decrement it has to move a 1 from another actor. Actor2 has 1 in its
% counter so we will move it and then decrement
BCountMoveFrom = Actor2.
BCountMoveTo = Actor1.

{ok, {BCount4, _, _, _}} = lasp:update(BCount3, {move, 1, BCountMoveTo}, BCountMoveFrom).

% now we can decrement from Actor1
{ok, {BCount5, _, _, _}} = lasp:update(BCount4, decrement, Actor1).
{ok, BCountRes5} = lasp:query(BCount5).

BCountRes1.
% 1
BCountRes2.
% 2
BCountRes3.
% 1
BCountRes5.
% 0

Counter that allows both increments and decrements.
Modeled as a dictionary where keys are replicas ids and values are pairs
where the first component is the number of increments and the second component
is the number of decrements.
An actor may only update its own entry in the dictionary.
The value of the counter is the sum of all first components minus the sum
of all second components.
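Following the same pattern as the examples above, a sketch of how this looks
with Lasp, assuming the type is exposed as state_pncounter and supports the
increment and decrement operations:

PNCounterType = state_pncounter.
PNCounterVarName = <<"pncountervar">>.

{ok, {PNCounter, _, _, _}} = lasp:declare({PNCounterVarName, PNCounterType}, PNCounterType).

{ok, {PNCounter1, _, _, _}} = lasp:update(PNCounter, increment, self()).
{ok, {PNCounter2, _, _, _}} = lasp:update(PNCounter1, increment, self()).
{ok, {PNCounter3, _, _, _}} = lasp:update(PNCounter2, decrement, self()).

{ok, PNCounterRes} = lasp:query(PNCounter3).

PNCounterRes.
% 1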

Partisan is the technology that provides Lasp's scalable cluster
membership. Partisan bypasses the use of Distributed Erlang for manual
connection management via TCP, and has several pluggable backends for
different deployment scenarios.

We first need to have Erlang installed. I will show you how to set up any
version you want to use, and how to have the version I will use here without
affecting any other installation you may have.

You will need to add ~/bin to your PATH variable so your shell can find the
kerl script; you can do it like this in your shell:

# set the PATH environment variable to the value it had before plus a colon
# (path separator) and a new path which points to the bin folder we just
# created
PATH=$PATH:$HOME/bin

If you want this to work every time you start a shell, you need to put it in
the rc file of your shell of choice; for bash it's ~/.bashrc, for zsh it's
~/.zshrc, check your shell's docs for others. You will have to add a line like
this:

export PATH=$PATH:$HOME/bin

After this, start a new shell or source your rc file so that it picks up the
new PATH variable. You can check that it's set correctly by running:

echo $PATH

Building an Erlang release with kerl

Now that kerl is installed and available in our shell, we need to build an
Erlang release of our choice; for this we will need a compiler and the other
tools and libraries required to compile it:

These are the instructions for Ubuntu 17.10; check the names of those packages
for your distribution.

There's not much we can do with our project at this stage, so we will just stop
it and exit by running the q(). function in the shell:

(akvs@ganesha)1> q().
ok

Coding (and testing) the Key Value store modules

The way I usually code in Erlang is to first build a stateless module with an
init function that returns some state; all other functions expect that state as
their first parameter, do something, and return the new state and the result.

These modules are really easy to use in the shell and to test.

This will be our first module, we will call it akvs_kv and it will have the
following API:

%% types:

-type error() :: {error, {atom(), iolist(), map()}}.
-type key()   :: binary().
-type value() :: any().

% we don't want other modules to know/care about the internal structure of
% the state type
-opaque state() :: map().

%% functions:

%% @doc create a new instance of a key value store
-spec new(map()) -> {ok, state()} | error().

%% @doc dispose resources associated with a previously created kv store
-spec dispose(state()) -> ok | error().

%% @doc set a value for a key in a kv store
-spec set(state(), key(), value()) -> {ok, state()} | error().

%% @doc get a value for a key or an error if not found
-spec get(state(), key()) -> {ok, value()} | error().

%% @doc get a value for a key or a default value if not found
-spec get(state(), key(), value()) -> {ok, value()} | error().

%% @doc remove a value for a key, if not found do nothing
-spec del(state(), key()) -> {ok, state()} | error().
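A minimal ETS-backed sketch that satisfies the spec above (the real akvs_kv
module may differ in the details):

-module(akvs_kv).

-export([new/1, dispose/1, set/3, get/2, get/3, del/2]).

new(_Opts) ->
    % one private ETS table per key value store instance
    Table = ets:new(akvs_kv, [set, private]),
    {ok, #{table => Table}}.

dispose(#{table := Table}) ->
    true = ets:delete(Table),
    ok.

set(State = #{table := Table}, Key, Value) ->
    true = ets:insert(Table, {Key, Value}),
    {ok, State}.

get(#{table := Table}, Key) ->
    case ets:lookup(Table, Key) of
        [{Key, Value}] -> {ok, Value};
        [] -> {error, {not_found, "Key not found", #{key => Key}}}
    end.

get(State, Key, DefaultValue) ->
    case get(State, Key) of
        {ok, Value} -> {ok, Value};
        {error, _} -> {ok, DefaultValue}
    end.

del(State = #{table := Table}, Key) ->
    true = ets:delete(Table, Key),
    {ok, State}.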

We can also use the type specs we defined to check our code using dialyzer:

rebar3 dialyzer

Everything seems to be right, let's move on to the next step.

But before that, in case you want to generate API docs for our code taking advantage
of the edoc annotations, you can do so by running:

rebar3 edoc

And opening apps/akvs/doc/index.html with a browser.

Wrapping the state

Stateless modules are a good start and are really easy to test and use, but we
don't want to push the burden of threading the state onto the users of our
code; we also want to centralize state management so that more than one process
can call our module and see the state changes made by other callers.

In this case we are using ETS to keep things simple, but if our kv were backed
by a map, or if we had some kind of cache, then state management would become
really important to get right; otherwise the results seen by each caller would
diverge.

To manage the state of our module we are going to wrap it in a process, a gen_server in this case.

The module will be called akvs_kv_s (_s for server; I don't know if there's a
convention for this).

The module is a basic gen_server that exposes a couple functions to call
the kv API from the akvs_kv module, you can read the code here: akvs_kv_s.
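Not the actual module, but a sketch of its shape, assuming the akvs_kv sketch
from before (the function names here are my guesses, check the linked code for
the real API):

-module(akvs_kv_s).

-behaviour(gen_server).

-export([start_link/1, stop/1, set/3, get/2, get/3, del/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

%% API

start_link(Opts) -> gen_server:start_link(?MODULE, [Opts], []).

stop(Pid) -> gen_server:call(Pid, stop).

set(Pid, Key, Value) -> gen_server:call(Pid, {set, Key, Value}).
get(Pid, Key) -> gen_server:call(Pid, {get, Key}).
get(Pid, Key, DefaultValue) -> gen_server:call(Pid, {get, Key, DefaultValue}).
del(Pid, Key) -> gen_server:call(Pid, {del, Key}).

%% gen_server callbacks: the gen_server state is the akvs_kv state

init([Opts]) -> akvs_kv:new(Opts).

handle_call({set, Key, Value}, _From, State) ->
    {ok, State1} = akvs_kv:set(State, Key, Value),
    {reply, ok, State1};
handle_call({get, Key}, _From, State) ->
    {reply, akvs_kv:get(State, Key), State};
handle_call({get, Key, DefaultValue}, _From, State) ->
    {reply, akvs_kv:get(State, Key, DefaultValue), State};
handle_call({del, Key}, _From, State) ->
    {ok, State1} = akvs_kv:del(State, Key),
    {reply, ok, State1};
handle_call(stop, _From, State) ->
    {stop, normal, ok, State}.

handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, State) -> akvs_kv:dispose(State), ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.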

We write tests for this module too, you can read the test's code here: akvs_kv_s_SUITE.
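To give an idea of what those tests look like, here is a small common_test
sketch against the hypothetical API sketched above (the real suite linked above
is more complete):

-module(akvs_kv_s_SUITE).

-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_testcase/2, end_per_testcase/2]).
-export([set_then_get/1, get_missing_returns_default/1]).

all() -> [set_then_get, get_missing_returns_default].

init_per_testcase(_TestCase, Config) ->
    % start a fresh kv server for each test case
    {ok, Pid} = akvs_kv_s:start_link(#{}),
    [{kv, Pid} | Config].

end_per_testcase(_TestCase, Config) ->
    ok = akvs_kv_s:stop(?config(kv, Config)),
    ok.

set_then_get(Config) ->
    Pid = ?config(kv, Config),
    ok = akvs_kv_s:set(Pid, <<"k">>, 42),
    {ok, 42} = akvs_kv_s:get(Pid, <<"k">>).

get_missing_returns_default(Config) ->
    Pid = ?config(kv, Config),
    {ok, default} = akvs_kv_s:get(Pid, <<"nope">>, default).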

Run the tests:

rebar3 ct

An API for our key value stores

Now we can spawn a key value store in a gen_server and apply operations to it,
but as with the stateless module, someone has to keep a reference to the
process and provide a nicer way to find and operate on our key value stores.
If it were only one, it would be easy to start it as a registered process and
send messages to it by name, but in our case we want to provide namespaces
where each namespace holds a key value store of its own.

The abstract API of this module should look like this:

-type ns()    :: binary().
-type key()   :: akvs_kv:key().
-type value() :: akvs_kv:value().
-type error() :: akvs_kv:error().

%% @doc set Key to Value in namespace Ns
-spec set(ns(), key(), value()) -> ok | error().

%% @doc get Key from namespace Ns
-spec get(ns(), key()) -> {ok, value()} | error().

%% @doc get Key from namespace Ns or DefaultValue if Key not found
-spec get(ns(), key(), value()) -> {ok, value()} | error().

%% @doc delete Key in namespace Ns
-spec del(ns(), key()) -> ok | error().

Right now we are going to solve the problem of who keeps the namespace to
process mapping in a really simple way so we can continue: we are going to set
up a public ETS table at application startup and look up the processes by
namespace there; if a namespace is not found, we are going to start the process
and register it under that namespace.
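A rough sketch of that lookup-or-start logic (the table name akvs_ns and the
helper names are my assumptions; the actual akvs module linked below may do it
differently):

% at application startup (for example in akvs_app:start/2):
%
%     ets:new(akvs_ns, [set, public, named_table, {read_concurrency, true}])

get_ns_pid(Ns) ->
    case ets:lookup(akvs_ns, Ns) of
        [{Ns, Pid}] ->
            {ok, Pid};
        [] ->
            % not found: start a kv server for this namespace and register it;
            % note this is racy (two callers can both start a server for the
            % same namespace), one of the reasons this approach isn't recommended
            {ok, Pid} = akvs_kv_s:start_link(#{}),
            true = ets:insert(akvs_ns, {Ns, Pid}),
            {ok, Pid}
    end.

set(Ns, Key, Value) ->
    {ok, Pid} = get_ns_pid(Ns),
    akvs_kv_s:set(Pid, Key, Value).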

This solution is not recommendable at all, but it will allow us to continue,
and since the API doesn't know a thing about the way we register/lookup
namespaces we can explore different alternatives later.

You can view the source code for akvs module here: akvs and the tests here akvs_SUITE.

An HTTP API for our key value stores

We are at the point where we can expose our APIs to the world; we are going to
do it by exposing a really basic HTTP API for it.
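As a hint of how that wiring could look with cowboy 2.x (the route, handler
module name and port below are assumptions, the actual code may differ):

start_http() ->
    % one route: namespace and key are bound from the path and passed to the handler
    Dispatch = cowboy_router:compile([
        {'_', [{"/ns/:ns/keys/:key", akvs_h_keys, []}]}
    ]),
    {ok, _} = cowboy:start_clear(akvs_http_listener,
                                 [{port, 8080}],
                                 #{env => #{dispatch => Dispatch}}).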

Now you can upload akvs.tar.gz to any bare server and start akvs there, as long
as the operating system is similar to (better if the same as) the one where you
built the release. This is because when building the release we bundle the
Erlang runtime for simplicity, which assumes specific versions of libraries
like libssl that may not be available on the target system if it's too
different.

Another way is to build the release without bundling the Erlang runtime and
have it available on the target system instead; just make sure that the Erlang
runtime on the target system has the same version you used to build the
release, otherwise you may experience errors due to modules/functions not being
available, or bytecode incompatibility if the target runtime is older than the
one used for the release.

Intro

You can't improve what you don't measure, and since I think there are areas in
the BEAM community (Erlang, Elixir, LFE, Efene, Alpaca, Clojerl et al.) to
improve, we need to have a better picture of it.

That's why some months ago I decided to create this survey. I told some people
about it and started researching other "State of the X Community" yearly
surveys, wrote some draft questions and shared them with a few people for
feedback. After a couple of rounds I made a Form and ran a test survey for more
feedback; after a couple dozen answers I cleared the results and announced it
publicly, with weekly reminders on multiple channels.

Result Analysis

We got 423 responses up to this point.

I present the results of the State of the BEAM Survey 2017 here in two ways:

Bar charts sorted from most answers to fewest

On questions with many answers I make a cut at some point

Raw data tables sorted from most answers to fewest

Here I did some consolidation of answers to avoid making them too large

I was thinking of doing a deep analysis of the answers, but then I realized
that if I did, many people would read mine and avoid analyzing the data
themselves in detail.

Instead I decided to open an analysis thread in some forum and later maybe
summarize the most interesting comments.

To ease the discussion I will make some light observations where I think it
makes sense and ask some questions to open the discussion.

Before diving into the results I want to make explicit two things that may make
the results less representative than they should be:

1. The "Elixir Effect"

I think the Elixir community is bigger, or at least more active, than the rest
of the BEAM community. Because of that, and the fact that Elixir already has
its own survey, I decided not to promote this survey there, to avoid Elixir
specific answers skewing the results and making this survey just another Elixir
survey with some BEAMers also replying.

With this clarification, and looking at the answers, I can still identify some
that are from Elixir-only developers; you can see that when Elixir specific
tools appear in the answers (Mix, ExUnit, Distillery, deploy to Heroku, etc.).
Just keep that in mind when analyzing the results.

2. The "Survivorship Bias Effect"

Survivorship bias or survival bias is the logical error of concentrating on
the people or things that made it past some selection process and
overlooking those that did not, typically because of their lack of
visibility. This can lead to false conclusions in several different ways.
It is a form of selection bias.

Survivorship bias can lead to overly optimistic beliefs because failures
are ignored, such as when companies that no longer exist are excluded from
analyses of financial performance.

The damaged portions of returning planes show locations where they can take
a hit and still return home safely; those hit in other places do not
survive.

This survey was done on people that wanted to learn Erlang, learned it, and are
still active enough in the community to see the survey announcement.

This means that the answers come from the ones that "survived", which makes it
really hard to get good feedback on the bad parts of the language, tooling and
community, since the people most affected by them aren't going to stay around
to fill in this survey.

How to reach those? I don't know, propose solutions on the discussion.

I forgot to ask if I could make the names of the companies public, so I won't,
but I can say that I got 202 responses and most of them are not duplicates.

Things to improve for next year

Ask users if they want their answers available to be distributed in raw form for others to analyze

Ask users if I can share publicly the name of the company where they use Erlang

Decide what to do about Elixir-only replies, maybe make a question about it

Make specific questions regarding better tooling

I forgot Russia and Central America options, maybe next time do Latin America?

Let's see the results!

Which languages of the BEAM do you use?

Clearly Erlang is the most used language. Ignoring the Elixir Effect, I'm kind
of disappointed by the lack of users trying alternative languages, more so
given that many of the complaints or requests in other questions are already
solved by other languages in the ecosystem: for example "better macros" or lisp
inspired features are solved by LFE, static/stronger typing or better static
analysis by Alpaca, and Elixir's pipe operator and a more mainstream syntax by
Efene.

My advice to the community: try the other languages, blog/tweet about it and
share feedback with their creators, there's a language for each taste!

Erlang: 326 (54.42%)
Elixir: 231 (38.56%)
LFE: 14 (2.34%)
Luerl: 12 (2.00%)
Alpaca: 9 (1.50%)
Clojerl: 4 (0.67%)
Erlog: 1 (0.17%)
Efene: 1 (0.17%)
PHP: 1 (0.17%)

How would you characterize your use of BEAM Languages today?

Many people are using it for serious stuff; the Open Source answer is really
low here but is contradicted by another answer below.

I think I should add another option for something like "experiments", "try new
ideas".

I use it at work: 327 (48.66%)
I use it for serious "hobby" projects: 245 (36.46%)
I'm just tinkering: 62 (9.23%)
I use it for my studies: 35 (5.21%)
Learning: 1 (0.15%)
katas: 1 (0.15%)
Open Source Software: 1 (0.15%)

In which domains are you applying it?

Distributed Systems: 225 (15.20%)
Web development: 214 (14.46%)
Building and delivering commercial services: 172 (11.62%)
Open source projects: 149 (10.07%)
Network programming: 136 (9.19%)
Enterprise apps: 92 (6.22%)
Databases: 80 (5.41%)
IoT / home automation / physical computing: 75 (5.07%)
System administration / dev ops: 60 (4.05%)
Big Data: 51 (3.45%)
Mobile app development (non-web): 46 (3.11%)
Research: 33 (2.23%)
AI / NLP / machine learning: 28 (1.89%)
Games: 28 (1.89%)
Math / data analysis: 23 (1.55%)
Scientific computing / simulations / data visualization: 21 (1.42%)
Desktop apps: 14 (0.95%)
Graphics / Art: 4 (0.27%)
Music: 3 (0.20%)
Industrial Automation: 2 (0.14%)
log system: 1 (0.07%)
videostreaming: 1 (0.07%)
soft real time analytics: 1 (0.07%)
Security Event Processing: 1 (0.07%)
Media encoding and distribution: 1 (0.07%)
Ad delivery: 1 (0.07%)
Telecom Apps: 1 (0.07%)
telecom and chat: 1 (0.07%)
video: 1 (0.07%)
Developer Tooling: 1 (0.07%)
Telecommunications: 1 (0.07%)
embedded systems: 1 (0.07%)
Advertising/RTB: 1 (0.07%)
Prototyping network apps: 1 (0.07%)
Real time systems: 1 (0.07%)
Real-Time Bidding: 1 (0.07%)
Instant messaging / VoIP / Communications: 1 (0.07%)
ad traffic management: 1 (0.07%)
REST/GraphQL API: 1 (0.07%)
Test systems: 1 (0.07%)
Learning: 1 (0.07%)
telecommunications: 1 (0.07%)
VoIP: 1 (0.07%)
Code static analysis: 1 (0.07%)

What industry or industries do you develop for?

Enterprise software: 117 (15.04%)
Communications / Networking: 103 (13.24%)
Consumer software: 85 (10.93%)
IT / Cloud Provider: 83 (10.67%)
Financial services / FinTech: 69 (8.87%)
Telecom: 67 (8.61%)
Media / Advertising: 46 (5.91%)
Retail / ecommerce: 41 (5.27%)
Academic: 29 (3.73%)
Healthcare: 28 (3.60%)
Education: 26 (3.34%)
Government / Military: 22 (2.83%)
Scientific: 16 (2.06%)
Legal Tech: 6 (0.77%)
Energy: 5 (0.64%)
Gaming: 2 (0.26%)
HR: 2 (0.26%)
Security: 2 (0.26%)
Logistics: 2 (0.26%)
sports/fitness: 1 (0.13%)
Retired: 1 (0.13%)
Sport: 1 (0.13%)
Business Intelligence: 1 (0.13%)
Telematics / Car industry: 1 (0.13%)
Manufacturing / Automotive: 1 (0.13%)
Cultural/Museum: 1 (0.13%)
Utilities: 1 (0.13%)
Open source: 1 (0.13%)
Travel: 1 (0.13%)
Sport analysis: 1 (0.13%)
Fitness: 1 (0.13%)
Online Games: 1 (0.13%)
Automotive: 1 (0.13%)
Marketing: 1 (0.13%)
Real estate: 1 (0.13%)
Consumer electronics: 1 (0.13%)
Non profit: 1 (0.13%)
Client driven: 1 (0.13%)
Industrial IoT: 1 (0.13%)
Electric utility: 1 (0.13%)
SaaS: 1 (0.13%)
Automobile: 1 (0.13%)
energy sector: 1 (0.13%)
utilities: 1 (0.13%)
Recruitment: 1 (0.13%)
Energetics: 1 (0.13%)

How long have you been using Erlang?

The entrants (1 year or less) being fewer than the 2 and 3 year groups may be
discouraging, or maybe it's a sign that this survey didn't reach as many
newcomers as it should.

> 6 Years: 116 (27.62%)
2 Years: 76 (18.10%)
3 Years: 58 (13.81%)
1 Year: 52 (12.38%)
Less than a year: 45 (10.71%)
5 Years: 36 (8.57%)
4 Years: 34 (8.10%)
I've stopped using it: 3 (0.71%)

What's your age

Similar to the previous one, the survey shows that we are not interesting to
young programmers (or this survey is not interesting to them :)

30-40: 179 (42.42%)
20-30: 112 (26.54%)
40-50: 93 (22.04%)
> 50: 31 (7.35%)
< 20: 7 (1.66%)

What's your gender

One I was expecting, but bad nonetheless.

Male: 401 (95.02%)
Prefer not to say: 15 (3.55%)
Female: 5 (1.18%)
attack helicopter: 1 (0.24%)

Where are you located?

North America: 127 (30.09%)
Western Europe: 117 (27.73%)
Eastern Europe: 42 (9.95%)
Northern Europe: 39 (9.24%)
South America: 30 (7.11%)
Asia: 25 (5.92%)
Oceania: 11 (2.61%)
Russia: 7 (1.66%)
India: 6 (1.42%)
China: 6 (1.42%)
South Saharan Africa: 3 (0.71%)
Middle East: 2 (0.47%)
Europe: 1 (0.24%)
Iran: 1 (0.24%)
Central America: 1 (0.24%)
Australia: 1 (0.24%)
Thailand: 1 (0.24%)
East Africa: 1 (0.24%)
Central Europe: 1 (0.24%)

What is your level of experience with functional programming?

7 respondents either got the joke or are really awesome programmers :)

Intermediate: 202 (48.44%)
Advanced: 148 (35.49%)
Beginner: 57 (13.67%)
Profunctor Optics Level: 7 (1.68%)
None: 3 (0.72%)

Prior to using Erlang, which were your primary development languages?

C or C++: 163 (14.75%)
Python: 145 (13.12%)
Javascript: 144 (13.03%)
Ruby: 138 (12.49%)
Java: 135 (12.22%)
PHP: 72 (6.52%)
C#: 56 (5.07%)
Perl: 46 (4.16%)
Go: 26 (2.35%)
Haskell: 25 (2.26%)
Swift or Objective-C: 24 (2.17%)
Common Lisp: 20 (1.81%)
Scala: 20 (1.81%)
Scheme or Racket: 14 (1.27%)
Visual Basic: 11 (1.00%)
Clojure: 8 (0.72%)
R: 8 (0.72%)
Rust: 7 (0.63%)
None: 6 (0.54%)
OCaml: 3 (0.27%)
F#: 3 (0.27%)
Kotlin: 2 (0.18%)
Standard ML: 2 (0.18%)
Fortran: 2 (0.18%)
Pascal: 1 (0.09%)
Ocaml: 1 (0.09%)
KDB: 1 (0.09%)
so "primary" here for me is "what was most used at work": 1 (0.09%)
TypeScript: 1 (0.09%)
Microsoft Access: 1 (0.09%)
Groovy: 1 (0.09%)
but I am a self-proclaimed polyglot: 1 (0.09%)
Shell: 1 (0.09%)
Tcl/Tk: 1 (0.09%)
Limbo: 1 (0.09%)
Smalltalk: 1 (0.09%)
clojure: 1 (0.09%)
ActionScript: 1 (0.09%)
Actionscript: 1 (0.09%)
Prolog: 1 (0.09%)
Racket: 1 (0.09%)
Bash: 1 (0.09%)
ML: 1 (0.09%)
TCL: 1 (0.09%)
Elixir: 1 (0.09%)
C ANSI POSIX: 1 (0.09%)
D: 1 (0.09%)
ocaml: 1 (0.09%)
Assembly: 1 (0.09%)

Which client-side language are you using with Erlang?

Javascript: 257 (44.93%)
None: 90 (15.73%)
Elm: 69 (12.06%)
Java: 36 (6.29%)
Swift/Objective-C: 36 (6.29%)
Clojurescript: 13 (2.27%)
ReasonML/Ocaml: 10 (1.75%)
Kotlin: 8 (1.40%)
Typescript: 7 (1.22%)
Scala: 7 (1.22%)
Purescript: 6 (1.05%)
C++: 4 (0.70%)
TypeScript: 3 (0.52%)
Go: 2 (0.35%)
typescript: 2 (0.35%)
Python: 2 (0.35%)
Erlang: 2 (0.35%)
Flow + Javascript: 1 (0.17%)
HTML-CSS: 1 (0.17%)
Haskell: 1 (0.17%)
What do you mean by "client-side language"?: 1 (0.17%)
other: 1 (0.17%)
Action Script 3: 1 (0.17%)
Coffeescript: 1 (0.17%)
d3.js: 1 (0.17%)
lua: 1 (0.17%)
Python/PyQt: 1 (0.17%)
Dart: 1 (0.17%)
Golang: 1 (0.17%)
Ruby: 1 (0.17%)
M$ C#: 1 (0.17%)
Python (interface to legacy system - not web based): 1 (0.17%)
clojure: 1 (0.17%)
C#: 1 (0.17%)
Tcl/Tk: 1 (0.17%)

In your Erlang projects, do you interoperate with other languages? if so, which ones?

C or C++: 156 (24.19%)
None: 92 (14.26%)
Python: 87 (13.49%)
Javascript: 72 (11.16%)
Java: 51 (7.91%)
Ruby: 37 (5.74%)
Rust: 27 (4.19%)
Go: 27 (4.19%)
Swift or Objective-C: 14 (2.17%)
C#: 12 (1.86%)
Scala: 11 (1.71%)
PHP: 9 (1.40%)
Perl: 8 (1.24%)
R: 8 (1.24%)
Haskell: 6 (0.93%)
Common Lisp: 4 (0.62%)
Clojure: 3 (0.47%)
OCaml: 3 (0.47%)
Elixir: 2 (0.31%)
Scheme or Racket: 2 (0.31%)
Bash: 2 (0.31%)
Kotlin: 1 (0.16%)
KDB: 1 (0.16%)
I use Erlang from Elixir: 1 (0.16%)
lua: 1 (0.16%)
SQL: 1 (0.16%)
java: 1 (0.16%)
Ocaml: 1 (0.16%)
go: 1 (0.16%)
Not directly via NIFs/ports but via HTTP/rabbit with ruby: 1 (0.16%)
Tcl/Tk: 1 (0.16%)
Lua: 1 (0.16%)
python: 1 (0.16%)

Which is your primary development environment?

I thought Emacs would win here, given the fact that the Erlang creators use Emacs.

We set up just one route, handled by the cadena_h_keys module. It's a plain
HTTP handler, no fancy REST stuff for now; we handle the request in the init/2
function itself, pattern matching against the method field of the request
object to decide what to do.
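A sketch of what that kind of handler looks like with cowboy 2.x (the
method-to-action mapping below is illustrative, the actual cadena_h_keys module
may differ):

-module(cadena_h_keys).

-export([init/2]).

init(Req0 = #{method := Method}, State) ->
    % dispatch on the HTTP method of the request
    Req = handle_req(Method, Req0),
    {ok, Req, State}.

handle_req(<<"GET">>, Req) ->
    reply(200, <<"get">>, Req);
handle_req(<<"POST">>, Req) ->
    reply(200, <<"set">>, Req);
handle_req(<<"DELETE">>, Req) ->
    reply(200, <<"del">>, Req);
handle_req(_Other, Req) ->
    reply(405, <<"method not allowed">>, Req).

reply(Status, Body, Req) ->
    cowboy_req:reply(Status, #{<<"content-type">> => <<"text/plain">>}, Body, Req).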

To avoid having to configure this in sys.config we will define a cuttlefish
schema in config.schema that cuttlefish will use to generate a default config
file and validation code for us.
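As a rough idea of what an entry in config.schema looks like, here is a
hypothetical cuttlefish mapping; the key names and default are made up for
illustration:

%% @doc port where the HTTP API listens
{mapping, "http.port", "cadena.http_port", [
  {datatype, integer},
  {default, 8080}
]}.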

We have to replace the variables from the variable overrides in our
config.schema file for each release before it's processed by cuttlefish itself;
for that we use the template directive in an overlay section of the release
config.
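A hypothetical example of what that could look like in the relx section of
rebar.config (the paths are made up):

{relx, [
    %% ... the rest of the relx configuration ...

    %% render config.schema with this release's overlay variables and ship
    %% the result inside the release
    {overlay_vars, "config/vars.config"},
    {overlay, [{template, "config/config.schema", "share/schema/cadena.schema"}]}
]}.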

Be careful with how you quit attached consoles in production systems :)

Configure Prod and Dev Cluster Releases

Building Prod Release

We start by adding a new section to rebar.config called profiles, and define 4
profiles that override the default release config with specific values. Let's
start by trying the prod profile, which we will use to create production
releases of the project.
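The profiles section isn't reproduced here; one possible shape for it, with the
prod profile plus three hypothetical dev-node profiles (the names and paths are
assumptions, the actual post's config may differ), would be:

{profiles, [
    {prod, [{relx, [{dev_mode, false},
                    {include_erts, true}]}]},

    %% three profiles for a local dev cluster: same code, different vm.args
    %% and sys.config so the nodes get different names and ports
    {node1, [{relx, [{vm_args, "config/node1/vm.args"},
                     {sys_config, "config/node1/sys.config"}]}]},
    {node2, [{relx, [{vm_args, "config/node2/vm.args"},
                     {sys_config, "config/node2/sys.config"}]}]},
    {node3, [{relx, [{vm_args, "config/node3/vm.args"},
                     {sys_config, "config/node3/sys.config"}]}]}
]}.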

The results of the commands run "as prod" are stored in the _build/prod folder.

You will notice, if you explore the _build/prod/rel/cadena folder, that there's
a folder called erts-8.3 (the version may differ if you are using a different
Erlang version). That folder is there because of the include_erts option we
overrode in the prod profile.

This means you can zip the _build/prod/rel/cadena folder, upload it to a server
that doesn't have erlang installed in it and still run your release there.

This is a good way to be sure that the version running in production is the
same you use in development or at build time in your build server.

Just be careful with deploying to an operating system too different from the
one you used to create the release, because you may have problems with bindings
like libc or openssl.

Joining the Cluster Together

Until here we built 3 releases of the same code with slight modifications to
allow running a cluster on one computer, but 3 nodes running doesn't mean we
have a cluster; for that we need to use what we learned in Multi-Paxos with
riak_ensemble Part 1, but now in code and not interactively.

join([NodeStr]) ->
    % node name comes as a list string, we need it as an atom
    Node = list_to_atom(NodeStr),
    % check that the node exists and is alive
    case net_adm:ping(Node) of
        % if not, return an error
        pang ->
            {error, not_reachable};
        % if it replies, join it, passing our node reference
        pong ->
            riak_ensemble_manager:join(Node, node())
    end.

create([]) ->
    % enable riak_ensemble_manager
    riak_ensemble_manager:enable(),
    % wait until it stabilizes
    wait_stable().

cluster_status() ->
    case riak_ensemble_manager:enabled() of
        false ->
            {error, not_enabled};
        true ->
            Nodes = lists:sort(riak_ensemble_manager:cluster()),
            io:format("Nodes in cluster: ~p~n", [Nodes]),
            LeaderNode = node(riak_ensemble_manager:get_leader_pid(root)),
            io:format("Leader: ~p~n", [LeaderNode])
    end.

We also need to add the riak_ensemble supervisor to our supervisor tree in cadena_sup:

init([]) ->
    % get the configuration from sys.config
    DataRoot = application:get_env(riak_ensemble, data_root, "./data"),
    % create a unique path for each node to avoid clashes if running more
    % than one node in the same computer
    NodeDataDir = filename:join(DataRoot, atom_to_list(node())),

    Ensemble = {riak_ensemble_sup,
                {riak_ensemble_sup, start_link, [NodeDataDir]},
                permanent, 20000, supervisor, [riak_ensemble_sup]},

    {ok, {{one_for_all, 0, 1}, [Ensemble]}}.

Before building the dev cluster we need to add the crypto app to cadena.app.src
since it's needed by riak_ensemble to create the cluster.
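The change is just adding crypto to the applications list in
apps/cadena/src/cadena.app.src; something like this (the other fields shown
here are typical defaults and may differ from the actual file):

{application, cadena, [
    {description, "riak_core tutorial application"},
    {vsn, "0.1.0"},
    {registered, []},
    {mod, {cadena_app, []}},
    %% crypto added here so it's started before riak_ensemble needs it
    {applications, [kernel, stdlib, crypto]},
    {env, []}
]}.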

Now let's build the dev cluster, I created a Makefile to make it simpler: