Changing the allocator strategy for binary_alloc to do aobf rather than
bf (+MBas aobf). Depending on how all of the binaries are allocated,
this could make new allocations favor the same carrier. This will only
add a small CPU overhead when allocating new binaries. If we are
unlucky, this could worsen our utilization even more; in the expected
case, however, it will make allocations faster and keep them in the
same area, which should reduce fragmentation and untraceable leaks.
We're also decreasing the size of our multi-block carriers (MBCs).
Right now we have smbcs set to 256 kB and lmbcs at 5 MB (rounded up to
8 MB, as ERTS only allocates powers of 2), with an average multi-block
carrier size of 7.78 MB. We try setting +MBlmbcs to 512 (kB) so that we
get many more, smaller carriers and thus increase the chance that a
carrier can be returned to the OS.
These two options have been recommended by members of the Erlang/OTP
team to reduce the passive memory leaks caused by the allocation
patterns of our peculiar log-message use cases.
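The flags described above would be passed to the emulator at startup; a
sketch of the invocation, assuming the values discussed (the real flags
live in the startup scripts such as bin/logplex):

```shell
# +MBas aobf   -- address-order best fit strategy for binary_alloc
# +MBsmbcs 256 -- smallest multi-block carrier size, in kB (unchanged)
# +MBlmbcs 512 -- largest multi-block carrier size, in kB (was 5120)
erl +MBas aobf +MBsmbcs 256 +MBlmbcs 512
```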

Batches io:format calls into a buffer process to reduce the number of
calls required, and also provides overload protection automatically.
This is an experimental change to see if it helps with performance,
given that the buffering and load shedding allow us to make log
messages asynchronous without losing too much data.
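A minimal sketch of the batching idea (the module name, thresholds, and
message shapes are hypothetical, not logplex's actual implementation):
callers send pre-formatted messages to a single registered process,
which flushes them in one io:put_chars/1 call and drops messages once a
threshold is reached.

```erlang
-module(log_buffer_sketch).
-export([start_link/0, log/2, loop/2]).

-define(MAX_BUF, 1000).   %% load-shedding threshold (assumed value)
-define(FLUSH_MS, 100).   %% flush interval in ms (assumed value)

start_link() ->
    Pid = spawn_link(?MODULE, loop, [[], 0]),
    register(?MODULE, Pid),
    {ok, Pid}.

%% Format in the caller, but never block on the IO protocol.
log(Fmt, Args) ->
    ?MODULE ! {log, io_lib:format(Fmt, Args)},
    ok.

loop(Buf, N) ->
    receive
        {log, _Msg} when N >= ?MAX_BUF ->
            loop(Buf, N);                 %% shed load: drop the message
        {log, Msg} ->
            loop([Msg | Buf], N + 1)
    after ?FLUSH_MS ->
        case Buf of
            [] -> ok;
            _  -> io:put_chars(lists:reverse(Buf))  %% one flush call
        end,
        loop([], 0)
    end.
```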

When the network or drains temporarily misbehave, the low timeouts we
currently have (1 second) end up killing connections and raising the
retry count of frames. When massive losses are seen, it is difficult to
tell whether the blame lies with logplex's speed at sending logs, the
drains' consumption, or the network.
By raising the timeout a bit, we should reduce the reconnection rate and
at the same time make it harder to blame logplex (as an individual node)
for the problems.
This should not have a dramatic impact on the drop rate, but possibly a
noticeable one.

Mochiweb branches were broken for the public and test rebar configs.
This comes from the migration from the mochi account to the internal
Heroku account: new branches were created, but the account name wasn't
switched.
Recon is a library to help with devops tasks in production.

Rather than configuring specific apps in many places (bin/logplex,
bin/devel_logplex, logplex_app.erl), configurations are moved to a
sys.config file that can be loaded by adding `-config sys` to the `erl`
executable, or loaded automatically when generating an OTP release.
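For illustration, a sys.config is a plain Erlang term file listing
per-application environment values; the keys below are hypothetical
placeholders, not logplex's real settings (those come from bin/logplex
and logplex_app.erl):

```erlang
%% sys.config -- loaded with `erl -config sys`
[
 {logplex, [
   {http_port, 8001},        %% illustrative key/value only
   {drain_buffer_size, 1024} %% illustrative key/value only
 ]},
 {sasl, [
   {errlog_type, error}
 ]}
].
```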

The current logplex version shows a point of contention for logs in its
use of io:format/2. Although it is unlikely that lager will help a lot
here, given we don't log directly to disk (which is where it shines
compared to other logging engines), it's worth trying to see if things
improve with it.
Custom log formats are used to make sure the production log format
remains 100% identical to the former one. They will, however, be
different during test runs because no specific care has been taken to
make the lager config compatible with the test cases.
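As a rough illustration of how such a custom format is declared (the
handler level and formatter_config below are examples, not the
production format):

```erlang
%% In sys.config: a console backend with an explicit formatter, so the
%% output layout is controlled field by field.
{lager, [
  {handlers, [
    {lager_console_backend,
     [info, {lager_default_formatter,
             [date, " ", time, " ", severity, " ", message, "\n"]}]}
  ]}
]}.
```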

An inactive drain or buffer (one receiving no requests from the outside
world) should be sent into hibernation in order to trigger a full-sweep
GC, compact the memory of the process, and reduce the overall load of
the system, possibly also reducing memory fragmentation of the VM at
the cost of slightly more CPU when hibernation triggers.
The timeout is implemented using the gen_fsm timeout option, which
automatically resets timeout timers when a message is processed by the
process. This should let us catch most kinds of inactivity and force
hibernation of the processes.
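The pattern above can be sketched as a small gen_fsm (module, state,
and timeout names are illustrative, not the real drain code): every
handled event returns the timeout again, implicitly resetting the
timer, and a `timeout` event answers with `hibernate`.

```erlang
-module(drain_hibernate_sketch).
-behaviour(gen_fsm).
-export([start_link/0, send/1]).
-export([init/1, connected/2, handle_event/3, handle_sync_event/4,
         handle_info/3, terminate/3, code_change/4]).

-define(HIBERNATE_TIMEOUT, 5000).  %% 5 seconds, as discussed above

start_link() -> gen_fsm:start_link({local, ?MODULE}, ?MODULE, [], []).
send(Msg)    -> gen_fsm:send_event(?MODULE, {msg, Msg}).

init([]) -> {ok, connected, [], ?HIBERNATE_TIMEOUT}.

%% Any activity re-arms the timeout by returning it again.
connected({msg, _Msg}, State) ->
    {next_state, connected, State, ?HIBERNATE_TIMEOUT};
%% No event for 5s: returning 'hibernate' triggers a full-sweep GC and
%% compacts the process heap until the next message arrives.
connected(timeout, State) ->
    {next_state, connected, State, hibernate}.

handle_event(_E, Name, State)         -> {next_state, Name, State}.
handle_sync_event(_E, _F, Name, State) -> {reply, ok, Name, State}.
handle_info(_I, Name, State)          -> {next_state, Name, State}.
terminate(_R, _N, _S)                 -> ok.
code_change(_V, Name, State, _Extra)  -> {ok, Name, State}.
```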
Note: it is not yet known if the timeout value of 5 seconds or the
amount of timers setup/cancellations will have an impact of any
significance on an active system or not. The values may need to be
tweaked or the effort redirected towards manual GC if refc binaries keep
on hogging the memory after this.

The logplex_msg_buffer module is used extensively by drain processes
that buffer requests and need to block as little as possible under
heavy load. The current implementation recalculated the entire queue
length on every call, which became both time-consuming and
CPU-intensive when the buffer was full, which is precisely when lengths
need to be counted most often.
This patch introduces an explicit counter for the buffer so that we
don't need to recalculate the length all the time, lowering a given
process's contention for runtime.
The module includes conversion clauses for all functions that are part
of the API, so that the code can be hot-loaded without stopping and
simply adapt to the new format.
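A sketch of both ideas together (the record fields and the assumed old
tuple shape are illustrative; see logplex_msg_buffer for the real
structure): the length is carried alongside the queue so len/1 is O(1),
and a conversion clause upgrades old-format buffers on the fly.

```erlang
-module(msg_buffer_sketch).
-export([new/0, push/2, len/1]).

%% New format: the count travels with the queue.
-record(lpdb, {messages = queue:new(), count = 0, max_size = 1024}).

new() -> #lpdb{}.

push(Msg, Buf = #lpdb{messages = Q, count = N}) ->
    Buf#lpdb{messages = queue:in(Msg, Q), count = N + 1}.

%% O(1) thanks to the explicit counter.
len(#lpdb{count = N}) -> N;
%% Conversion clause: a buffer in the assumed old, counter-less format
%% ({lpdb, Queue, MaxSize}) is upgraded in place, so hot code loading
%% works without restarting the process.
len({lpdb, Q, Max}) ->
    len(#lpdb{messages = Q, count = queue:len(Q), max_size = Max}).
```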

When a timer is set for a reconnection, we force hibernation in order
to do a full-sweep GC of the drain processes.
This might incur a certain cost for very busy-but-disconnected
processes, forcing a short pause, but the backoff timers for
reconnections will act as rate limiters on this.

With IO being blocking for individual processes due to Erlang's IO
protocol and logplex using io:format/2 to log information, it is
possible that a node doing a lot of logging has bad tail latencies on
its API, as reported by issues #49 and #51 on GitHub.
This quickfix, pending a rewrite of the logging system to be
non-blocking and load-shedding, moves logging out of the critical path
for most requests. Some requests, such as token creation for channels
(POST /v2/channels/(\\d+)/tokens), still contain logs in their critical
path and will only see minor improvements.
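The general shape of such a quickfix can be illustrated as a
fire-and-forget wrapper (the function name is hypothetical, not the
actual patch): a throwaway process takes the blocking io:format/2 call,
so the request process never waits on the Erlang IO protocol.

```erlang
%% Fragment: the spawned process absorbs any IO blocking; the caller
%% returns immediately. Message ordering across calls is not guaranteed.
log_async(Fmt, Args) ->
    spawn(fun() -> io:format(Fmt, Args) end),
    ok.
```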