CloudI is a free, Erlang-based private cloud for efficient processing in C++, maximizing hardware utilization with dynamic load balancing. CloudI relies on external databases to keep work fault-tolerant by preserving the work data. Implementing work for the cloud is as simple as declaring the cloud interface. This presentation provides an introduction to the CloudI framework.

Eric will talk about how to use a new continuous build system for Erlang. He will also provide an introduction to working with the continuous build system to automatically detect changes in source, then build, test and publish OTP applications and releases. This will let you start getting the benefits of straightforward continuous builds in your Erlang projects.

Generally speaking, servers and clients in Erlang are implemented as named functions in named modules. Likewise, processes communicate via messages that have a statically known structure and, specifically, static tags that serve as the "names" of the messages. This exposes a great deal of information about an Erlang application: the names of the modules, the names of the entry-point functions within them, the "names" of the messages between server and client, and so on. In this work, we show how higher-order functions, and some well-studied techniques from functional programming, can be used to obtain anonymity of servers and messages.
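The idea of trading static names for higher-order functions can be sketched outside Erlang, too. Below is a minimal Python analogue (all names are illustrative, not from the talk): instead of the server dispatching on a statically tagged message such as `{get, Key}`, the client sends a function for the server to apply to its state, so neither the message "name" nor the server's state layout is exposed.

```python
def make_server(state):
    """Return an anonymous server: a closure over hidden state."""
    def handle(request):
        nonlocal state
        # The request is itself a function: state -> (reply, new_state),
        # so the server never needs to know any message tags.
        reply, state = request(state)
        return reply
    return handle

# Tagged-message style would look like: server(("get", "x")).
# Higher-order style: the client ships the behaviour instead of a tag.
server = make_server({"x": 1})

get_x = lambda st: (st.get("x"), st)           # read request
put_y = lambda st: (None, {**st, "y": 2})      # update request

print(server(get_x))                           # -> 1
server(put_y)
print(server(lambda st: (sorted(st), st)))     # -> ['x', 'y']
```

The server above has no externally visible protocol at all; clients that hold a reference to `handle` can interact with it, and nothing else can.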

We'll start off with a mini Git tutorial to help conceptualize the problems we had at GitHub that were addressed in the crafting of BERT and BERT-RPC. If you're unfamiliar with Git, this may open your eyes to the power, flexibility, and speed of this distributed version control system. Your language is great at dealing with distributed systems, shouldn't your SCM be just as adept?

BERT (Binary ERlang Term) is a new serialization format based on Erlang's external term format. It supports rich data types such as atoms, heterogeneous lists, tuples, binary data, booleans, dictionaries, and more. Just as JSON acts as an excellent inter-process data format for web-based technologies, BERT acts as an efficient inter-process data format for low-latency server technologies. Built on top of BERT is BERT-RPC, a simple, dynamic RPC protocol providing both synchronous and asynchronous requests, caching directives, streaming, and even callbacks.
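To make the format concrete, here is a small hand-rolled sketch (an assumption-laden toy, not a real BERT library) of encoding a few terms in Erlang's external term format, on which BERT is based. The tag bytes come from the ETF specification: 131 is the version magic, 97 is SMALL_INTEGER_EXT, 100 is ATOM_EXT, and 104 is SMALL_TUPLE_EXT.

```python
import struct

def encode_small_int(n):
    # SMALL_INTEGER_EXT: tag 97, one unsigned byte (0 <= n <= 255)
    return bytes([97, n])

def encode_atom(name):
    # ATOM_EXT: tag 100, 2-byte big-endian length, then the atom text
    data = name.encode("latin-1")
    return bytes([100]) + struct.pack(">H", len(data)) + data

def encode_small_tuple(elements):
    # SMALL_TUPLE_EXT: tag 104, 1-byte arity, then the encoded elements
    return bytes([104, len(elements)]) + b"".join(elements)

def encode_term(inner):
    # A complete term is the version byte 131 followed by the payload
    return bytes([131]) + inner

# The tuple {ok, 42} on the wire:
wire = encode_term(encode_small_tuple([encode_atom("ok"),
                                       encode_small_int(42)]))
```

The resulting bytes match what Erlang's `term_to_binary({ok, 42})` historically produces, which is exactly why a Ruby or Python BERT endpoint can talk to an Erlang node with no translation layer.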

Tom created these technologies to help us scale GitHub. He needed a fast, robust way for one process to make low-latency calls to another, and looked at Thrift and Protocol Buffers, but those solutions were too complex and not flexible enough to hang with Ruby. Tom also wrote Ernie, an Erlang/Ruby hybrid BERT-RPC server that makes it dead simple to write your RPC functions in Ruby (or other languages). Together, all these technologies power GitHub's new federated architecture and allow us to independently and horizontally scale both the frontend and backend layers.

Disco combines the strengths of Erlang and Python to enable rapid development of massively parallel computational pipelines. Disco implements the MapReduce framework, making it a powerful platform for doing distributed computing on immense datasets.
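Disco jobs are written as ordinary Python map and reduce functions. The sketch below loosely follows Disco's convention (map emits key/value pairs; reduce consumes an iterator of pairs) but runs both phases locally, so no Disco cluster or installation is assumed and the function names are illustrative.

```python
from itertools import groupby
from operator import itemgetter

def fun_map(line, params):
    # Map phase: emit a (word, 1) pair for every word in the input line
    for word in line.split():
        yield word, 1

def fun_reduce(pairs, params):
    # Reduce phase: group the pairs by key and sum the counts
    for key, group in groupby(sorted(pairs, key=itemgetter(0)),
                              key=itemgetter(0)):
        yield key, sum(v for _, v in group)

# Local simulation of the two phases (on a cluster, Disco would shard
# the input and run these functions on many nodes in parallel):
lines = ["to be or not to be"]
mapped = [kv for line in lines for kv in fun_map(line, None)]
counts = dict(fun_reduce(mapped, None))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

The appeal of the model is exactly this: the user writes two small pure functions, and the Erlang side of Disco handles distribution, scheduling and fault tolerance.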

The first step to building a system driven by data is indexing the data in such a way that it is accessible in logarithmic or constant time. Such random access is crucial for building online systems, but is also valuable in optimizing many other applications that rely upon lookups into the data.
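As a toy illustration of those two access costs (the names here are purely illustrative, not Discodex's API): a sorted list gives logarithmic-time lookups via binary search, while a hash index gives amortized constant-time lookups.

```python
import bisect

keys = sorted(["apple", "banana", "cherry", "damson"])

def log_lookup(key):
    # O(log n): binary search into the sorted key list
    i = bisect.bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

# O(1) amortized: a hash index over the same keys
index = {k: True for k in keys}

print(log_lookup("cherry"), "cherry" in index)   # True True
print(log_lookup("grape"))                       # False
```

Either structure turns a linear scan over billions of items into something an online system can afford to do per request.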

`Discodex` builds on top of Disco, abstracting away some of the most common operations for organizing piles of raw data into distributed, append-only indices and querying them. By adopting Erlang-style immutability of data structures, it is possible to index and query billions of data items efficiently. Discodex adopts a similar strategy to Disco in achieving this goal: making the interface so embarrassingly simple and intuitive that development time is never an excuse for not building an index.

In this talk we discuss the architecture of this awesome, open-source tool (with Erlang at its heart), and how to use it. We also provide a real-world example of using Discodex for data insight at Nokia, and the reason we built it in the first place.

This talk will tell you the story about how Erlang got multicore support and will give you all the gory details about utilizing multicore processors in a conventional programming language. I'll tell you what we've done at OTP so that you, as an Erlang programmer, can sit back and enjoy the fact that you don't have to bother with such things!

Erlang was invented in the 90s to address rapid development of non-stop scalable telecoms systems. Initial requirements included massive concurrency, distribution transparency, in-service upgrades, plug-and-play expansion and high programmer productivity. A rapidly growing Open Source community is now using Erlang for scalable web services, messaging systems and cloud computing services. In this talk we will look at how Erlang is breaking out of the clusters of last century and entering today's cloud computing environments.

The preprocessing step in Erlang code compilation is largely undocumented, but very powerful. The language can be extended to include custom guards, syntax and constructs. Included in the talk are the following:

* Dynamic compilation with the erl_scan, erl_parse, epp and compile modules
* Reverse engineering compiled BEAM code into forms
* Preprocessing vs macros
* The parse_transform compile directive and example usages like:
  * adding helper functions into modules that take advantage of record definitions that aren't available at runtime
  * performing data integrity checks by expanding custom guards into additional function clauses
* Example usages of the custom_guards, dynamic_compile and excavator projects in production environments at EA
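Python offers a rough analogue of the erl_scan/erl_parse/compile pipeline and of parse_transform in its built-in `ast` and `compile` facilities. The hedged sketch below shows the same source-to-forms-to-transformed-forms-to-code idea, not the Erlang API itself: a transformer walks the parsed forms and rewrites them before compilation, just as a parse_transform rewrites Erlang abstract forms.

```python
import ast

source = "def answer():\n    return 6 * 7\n"

# ~ erl_scan + erl_parse: turn source text into abstract forms
tree = ast.parse(source)

class ConstantFolder(ast.NodeTransformer):
    # ~ a parse_transform: rewrite the forms before compiling them
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, ast.Mult)):
            folded = ast.Constant(node.left.value * node.right.value)
            return ast.copy_location(folded, node)
        return node

tree = ast.fix_missing_locations(ConstantFolder().visit(tree))

# ~ the compile module: turn the transformed forms into loadable code
code = compile(tree, "<generated>", "exec")
namespace = {}
exec(code, namespace)
print(namespace["answer"]())   # 42, with the multiplication folded away
```

The Erlang version is more powerful in one key respect: a parse_transform runs inside the normal compiler pipeline, so every module that names it in a compile directive gets rewritten transparently.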

In this talk we show how the 'similar code' detection facilities of Wrangler, combined with its portfolio of refactorings, allow test code to be shrunk dramatically under the guidance of the test engineer. The talk is illustrated with examples from Open Source and commercial Erlang development projects.

Nitrogen has gained a quick and active community by providing extensive example-based documentation. In this talk, Rusty will continue this example-based approach by walking through a simple application built on Nitrogen and Riak, highlighting common patterns and best practices.

Many languages provide mechanisms for programmers to declare abstract data types (ADTs), hide the details of their implementation, and allow manipulation of these ADTs only by controlled interfaces. This information hiding strategy allows the implementation of the ADT module to be changed without disturbing the client programs. In Erlang programs structural information about ADTs is exposed by pattern matching and type inspecting built-ins, making it very hard to guarantee that changes in the ADT's implementation will not have devastating effects on clients' code. We have recently extended Erlang with the ability to declare opaque terms (i.e., terms whose structure should not be inspected outside their defining module) and detect violations of their opaqueness using Dialyzer. In this talk we will present this addition to the language and its capabilities, and will show interesting examples of code that (erroneously) depended on implementation details of commonly used library modules (ETS tables, gb_sets, gb_trees, etc.).
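The kind of opaqueness violation Dialyzer detects can be illustrated in miniature (Python here, with illustrative names): the client code below peeks at an ADT's internal representation instead of going through its interface, so it will silently break the day the module changes its representation.

```python
# A toy "set" ADT. Its representation (a tagged tuple holding a list)
# is an implementation detail that clients should never inspect.
def set_new():
    return ("set", [])

def set_add(s, x):
    tag, items = s
    return (tag, items if x in items else items + [x])

def set_member(s, x):
    # The controlled interface: the only sanctioned way to query the set
    return x in s[1]

s = set_add(set_new(), 3)

ok = set_member(s, 3)     # interface use: survives any representation change
leaky = 3 in s[1]         # peeks at the tuple layout: an opaqueness violation
```

Both expressions happen to return True today, but only `set_member` keeps working if the module switches, say, to a balanced-tree representation; the `s[1]` access is exactly the pattern an opaqueness check would flag.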

We believe there are at least three reasons why industry uses such specifications, although most industrial uses take advantage mainly of the first and only occasionally of the second: #1, specifying the bits "on the wire" in a way all parties agree on; #2, API documentation, describing how the protocol's API works. We have found that there is a very important third reason: #3, protocol meta-data, serving as input for other tools (development, testing, etc.).

My company is one of several that is collectively building a custom Webmail system for a large carrier in Asia. (The initial deployment will be used by well over 1 million customers.) We definitely take advantage of reasons #1 and #2 to facilitate the multi-way project planning with several development groups and the carrier/customer. Their value in communicating both with developers and with project managers is quite large.

We would like to share with other Erlang developers our experiences of using and enhancing UBF for reason #3. Much of this work is slowly making its way into the wider world, using an MIT license and distributed via GitHub (http://github.com/norton/ubf/tree/master).

CouchDB and TokyoCabinet are two very interesting database managers. CouchDB is famous for its robustness, its simple document storage model, its RESTful interface, and the fact that it is written in Erlang. TokyoCabinet, on the other hand, is written in C, is blazingly fast, and already has an interface to Mnesia (tcerl via mnesiaex).

In this talk I will discuss how I used Mnesia as a frontend to these database managers and the problems I encountered while integrating it with a legacy Erlang system based on Mnesia. I will also present the results of some transaction benchmarks, and discuss some interesting features of CouchDB and TokyoCabinet.