Gray and Reuter define consistency as: “A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state. This requires that the transaction be a correct program.”

Loosely translated, you should be happy with yourself. In your own opinion, you should be sane.

Through the years, this most ill-defined property of the ACID transaction has been conflated with isolation and interpreted through the prism of read/write updates to an apparently centralized, record-oriented database. This suits systems developers who need a narrow definition to feel they can accomplish something. Unfortunately, it leaves many real-world problems in the dust.

In this talk, we will explore the models of consistency and the facades of reality used in practical systems.

Timing is everything. If a system does not respond in a timely manner then, at best, its value is greatly diminished; at worst, it is effectively unavailable. Reactive systems need to meet predictable response time guarantees regardless of load or dataset size, even in the presence of burst traffic and partial failure conditions.

In this talk we will explore what it means to be responsive and the fundamental design patterns required to meet predictable response time guarantees. Queueing theory, Little’s Law, Amdahl’s Law, the Universal Scalability Law – we’ll cover the good bits. Then we’ll explore algorithms that work with these laws to deliver timely responses from our applications no matter what gets thrown at them.
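As a taste of those good bits, here is a minimal sketch of three of these laws in Python. The numbers are purely illustrative, not from the talk:

```python
# Back-of-the-envelope laws for responsiveness (illustrative numbers).

def littles_law_inflight(arrival_rate, avg_latency_s):
    """Little's Law, L = lambda * W: mean number of requests in flight."""
    return arrival_rate * avg_latency_s

def mm1_response_time(service_rate, arrival_rate):
    """Mean time in an M/M/1 system: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrivals outpace service")
    return 1.0 / (service_rate - arrival_rate)

def amdahl_speedup(parallel_fraction, n):
    """Amdahl's Law: speedup from n-way parallelism of a partly serial task."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n)

# 1,000 req/s at 50 ms mean latency means 50 requests in flight.
print(littles_law_inflight(1000, 0.050))      # 50.0

# A 100 req/s server: mean response time at 50% load vs 95% load.
print(mm1_response_time(100, 50))             # 0.02 s
print(mm1_response_time(100, 95))             # 0.2 s -- 10x worse

# A 95%-parallel workload on 32 cores speeds up only ~12.5x.
print(round(amdahl_speedup(0.95, 32), 1))     # 12.5
```

The queueing estimate is the one to internalize: response time degrades non-linearly as utilization approaches saturation, which is why burst traffic matters so much.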

Microseconds in high-frequency trading or milliseconds in web apps: it’s all the same design principles.

In order to operate 24/7 an application must embrace constant change and failure. This kind of resiliency is achievable through the application of reactive design principles. Learn the theory via real-world examples at Netflix along with some lessons learned the hard way in production. Topics of interest will include service-oriented architectures (microservices), cloud computing, where to put application state, hot deployments, bulkheading, circuit breakers, degrading gracefully, operational tooling, and how application architecture affects resilience.
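One of those patterns, the circuit breaker, fits in a few lines. This is a toy Python sketch of the idea, not the Hystrix API; all names and defaults are illustrative:

```python
# Toy circuit breaker: after `failure_threshold` consecutive failures the
# circuit opens and calls fail fast until `reset_timeout` elapses, at
# which point one probe call is allowed through (the "half-open" state).

import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: let one probe through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                # any success closes the circuit
        return result
```

The point is to degrade gracefully: a failing dependency costs a fast local exception instead of a slow, thread-consuming timeout on every request.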

Building highly-available and fault-tolerant distributed systems is hard enough, but making them elastic is even harder. Elastic distributed systems enable operators to reduce costs (by deallocating resources when they are unnecessary) and increase overall throughput (by reallocating resources where they can be used more effectively).

In this talk I’ll introduce elasticity with concrete examples from parallel computing before exploring some of the fundamentals of building elastic distributed systems, specifically addressing software architectural patterns that promote versus deter elasticity. In addition to what it takes to build elastic distributed systems I’ll discuss the compute infrastructure necessary for supporting them, drawing on some of the primitives provided by Apache Mesos for illustration.
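To make the scaling decision concrete, here is a toy target-tracking policy in Python. It is purely illustrative and not a Mesos primitive: the policy maps observed load to a resource count, holding per-instance utilization near a setpoint:

```python
# Toy elasticity policy (illustrative): choose an instance count so that
# per-instance utilization approaches a target setpoint.

import math

def desired_instances(current, observed_utilization, target=0.6):
    """Instances needed to bring per-instance utilization toward `target`."""
    if observed_utilization <= 0:
        return 1                           # keep a floor of one instance
    return max(1, math.ceil(current * observed_utilization / target))

print(desired_instances(4, 0.9))   # 6 -- overloaded, scale out
print(desired_instances(4, 0.3))   # 2 -- underused, scale in
```

Even this toy exposes the architectural requirement: the application must tolerate instances appearing and disappearing, which is where the software patterns in the talk come in.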

Modern CPU architectures are designed as message passing systems with fast and fat on-chip networks. Many of the common abstractions held up as models of great interactions break down horribly in message passing systems when used at scale because they are, in reality, too tightly coupled when communication delay can’t be ignored.

But here is a secret: it’s all about the protocols. What can we learn from protocol design that could make being message-driven easier? What are some of the common pitfalls and best practices for creating resilient protocols? And what does it really mean to be message-driven?
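As one small illustration of protocol thinking, here is a hedged Python sketch of request/reply matched by correlation id. All names are made up for illustration; the point is that coupling lives in the messages themselves, so late or duplicate replies are handled by the protocol rather than by a blocked call stack:

```python
# Illustrative message-driven request/reply: each request carries a
# correlation id; replies are matched back to pending callbacks, and
# anything unmatched, late, or duplicated is simply dropped.

import itertools

class ReplyMatcher:
    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}                 # correlation id -> callback

    def request(self, payload, on_reply):
        corr_id = next(self._ids)
        self._pending[corr_id] = on_reply
        return {"type": "request", "id": corr_id, "payload": payload}

    def on_message(self, msg):
        if msg.get("type") != "reply":
            return                         # not part of this exchange
        cb = self._pending.pop(msg.get("id"), None)
        if cb is not None:                 # unmatched or duplicate: ignore
            cb(msg["payload"])

matcher = ReplyMatcher()
replies = []
req = matcher.request("ping", replies.append)
matcher.on_message({"type": "reply", "id": req["id"], "payload": "pong"})
matcher.on_message({"type": "reply", "id": req["id"], "payload": "pong"})  # dup
print(replies)                             # ['pong'] -- delivered exactly once
```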

Studying latency behavior can tell us a lot about a system. It can help us focus our efforts on the parts that matter most for keeping systems reactive and responsive. It can save us time and lead to better systems. But it is an often overlooked part of system modeling, development, testing, and monitoring efforts.

In this talk, Gil Tene will discuss various characteristic patterns of latency behavior, and show what we can learn about systems by simply observing their latency in detail, in the context that matters to the system at hand. The latency involved in a from-stand-still reaction to a single event is different from the latency involved in processing a message coming off a hot stream. The latency behavior experienced when waiting in a queue is different from that seen when traversing a constant-length operation or physical distance. We'll discuss the implications of these and other characteristics on the way systems behave in the real world, and on the information we need to gather when testing systems in order to understand their reaction and responsiveness.
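A toy Python example of one such distinction, with purely illustrative numbers: a constant-cost workload and a queue-like workload can share a mean latency while their tails differ by an order of magnitude:

```python
# Two latency distributions (ms) with identical means but very
# different tails -- the mean hides the queueing stalls entirely.

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[k]

def mean(samples):
    return sum(samples) / len(samples)

constant = [10.0] * 100                    # fixed-cost 10 ms operation
queued = [1.0] * 95 + [181.0] * 5          # mostly fast, stalls behind a burst

print(mean(constant), mean(queued))        # 10.0 10.0 -- identical means
print(percentile(constant, 99))            # 10.0
print(percentile(queued, 99))              # 181.0 -- 18x the mean
```

This is why observing the full latency distribution, not a summary average, is what tells you how a system will actually feel under load.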

This is where you get the chance to mingle with the speakers and like-minded technology folks, and to ask all those silly questions you were too embarrassed to ask during the day. Remember that React is a deliberately small, informal conference, giving you plenty of time to chat and mingle with the speakers.

Wednesday 19th November 2014

The idea of the present is an illusion. Everything we see, hear and feel is just an echo from the past. But this illusion has influenced us and the way we view the world in so many ways; from Newton’s physics with a linearly progressing timeline accruing absolute knowledge along the way to the von Neumann machine with its total ordering of instructions updating mutable state with full control of the “present”. But unfortunately this is not how the world works. There is no present, all we have is facts derived from the merging of multiple pasts. The truth is closer to Einstein’s physics where everything is relative to one’s perspective.

As developers we need to wake up and break free from the perceived reality of living in a single globally consistent present. The advent of multicore and cloud computing architectures means that most applications today are distributed systems—multiple cores separated by the memory bus or multiple nodes separated by the network—which puts a harsh end to this illusion. Facts travel at the speed of light (at best), which makes the distinction between past and perceived present even more apparent in a distributed system where latency is higher and where facts (messages) can get lost.

The only way to design truly scalable and performant systems that can construct a sufficiently consistent view of history—and thereby our local “present”—is by treating time as a first class construct in our programming model and to model the present as facts derived from the merging of multiple concurrent pasts.
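One concrete way to make time first class, sketched here in Python as an illustration (this is a standard vector-clock construction, not something specific to the talk), is to record each node's view of history so that two "pasts" can be compared and merged rather than forced into one global present:

```python
# Vector clocks: each node's counter records how much of that node's
# history has been observed. Two clocks can then be compared -- and two
# concurrent pasts merged -- without assuming a shared "now".

def merge(a, b):
    """Merge two vector clocks: the smallest history containing both."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in a.keys() | b.keys()}

def concurrent(a, b):
    """True when neither clock's history contains the other's."""
    keys = a.keys() | b.keys()
    before = any(a.get(n, 0) < b.get(n, 0) for n in keys)
    after = any(a.get(n, 0) > b.get(n, 0) for n in keys)
    return before and after

p1, p2 = {"A": 2, "B": 0}, {"A": 1, "B": 3}     # two concurrent pasts
print(concurrent(p1, p2))                        # True
print(sorted(merge(p1, p2).items()))             # [('A', 2), ('B', 3)]
```

The merged clock is exactly the "present as facts derived from the merging of multiple concurrent pasts" described above.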

In this talk we will explore what all this means to the design of our systems, how we need to view and model consistency, consensus, communication, history and behavior, and look at some practical tools and techniques to bring it all together.

Data is moving. Data has always been on the move, the fact that when using computers we often need data to stand still in order to do something with it is usually a reflection of our lack of skill rather than of the data itself. So how do we query fast moving data? The normal rules do not apply - or do they?

In this talk I will discuss event stream processing, particularly the challenges in processing high-velocity, high-throughput streams of data and some of the solutions that people have tried. We will also look at some of the theoretical underpinnings of stream processing, the challenges around high availability, transactionality and integration with semi-static data sources. We’ll also touch on the current buzz around “Fast Data”, Big Data’s amped cousin, and what all this means for Hadoop and other batch-oriented systems.
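To give a flavor of querying moving data, here is a hedged Python sketch of a tumbling-window count over an event stream; it assumes events arrive in timestamp order, and the names and numbers are illustrative:

```python
# Tumbling-window count over an event stream: results are emitted per
# window as the stream moves, rather than waiting for the data to
# stand still. Assumes in-order timestamps.

from collections import defaultdict

def tumbling_counts(events, window_size):
    """events: iterable of (timestamp, key); yields (window_start, key, count)."""
    current_window, counts = None, defaultdict(int)
    for ts, key in events:
        window = ts - (ts % window_size)
        if current_window is not None and window != current_window:
            for k, c in sorted(counts.items()):
                yield (current_window, k, c)
            counts.clear()
        current_window = window
        counts[key] += 1
    if current_window is not None:             # flush the final open window
        for k, c in sorted(counts.items()):
            yield (current_window, k, c)

stream = [(1, "buy"), (2, "sell"), (3, "buy"), (11, "buy")]
print(list(tumbling_counts(stream, 10)))
# [(0, 'buy', 2), (0, 'sell', 1), (10, 'buy', 1)]
```

Even this toy surfaces the hard questions the talk addresses: what happens with late or out-of-order events, and how window state survives failure.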

Reactive user interfaces need to scale from handling dozens, sometimes hundreds, of updates per second down to data that changes on a daily or weekly basis, as well as handling input from users. This means that literally everything is a stream of data.

We will discuss and demonstrate how the trading applications we've built make reactivity a first-class concept to compose these streams to provide real-time information about the state of the market and the platform. We'll talk about building reactive applications that handle requests that time out, data that gets out of sync, and servers that fail - all elegantly and without refreshing the page.

The Reactive Manifesto's *Resilient* trait says that a system must stay responsive in the face of failure. I'll discuss how various systems approach failure handling. I'll start with the theoretical foundation of *Communicating Sequential Processes* (CSP) and two modern systems inspired by it, the Go language and Clojure's core.async library. Then I'll examine failure handling in modern implementations of the Actor Model, which is dual to CSP (I'll explain what that means), as well as examine failure handling in implementations of Functional Reactive Programming (FRP) and Reactive Extensions (Rx).
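As a small taste of CSP-style failure handling, here is an approximation in Python using a bounded queue as the "channel" (Go's select-with-timeout expresses the same idea natively): a receive that times out turns a dead producer into a value the consumer can act on, instead of a call that blocks forever:

```python
# CSP-style channel approximated with queue.Queue: timeout on receive
# surfaces failure as a value (None) rather than an indefinite hang.

import queue
import threading

def recv(ch, timeout):
    """Receive from a channel, returning None on timeout."""
    try:
        return ch.get(timeout=timeout)
    except queue.Empty:
        return None

channel = queue.Queue(maxsize=1)

def producer(ch):
    ch.put("event-1")                 # ...then dies without sending more

threading.Thread(target=producer, args=(channel,), daemon=True).start()

print(recv(channel, timeout=1.0))     # event-1
print(recv(channel, timeout=0.2))     # None -- producer gone; degrade gracefully
```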

So, you're building responsive and resilient applications, scaling to deal with an ever expanding firehose of events arriving at your front door. You're filling storage by the terabyte without even trying, and that needs to be resilient, and responsive, and scalable too. So obviously you're storing your data using... well... what? Is there really a single technology that meets all your needs for persistence? And are the 'conventional' technologies really a lost cause?

In this talk we'll look at some of the successful - and less successful - strategies for managing high-frequency, high-volume data. We will explore what is technically possible when you need to record millions of messages per second durably without a bottomless budget, review the common storage options and what they are capable of, and also look at what is possible when you're willing to roll up your sleeves and write your own storage engine.
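For a flavor of the roll-your-own approach, here is a hedged Python sketch of the core of an append-only log: each message is framed with a length prefix so the log can be replayed after a crash. A real engine adds batching, checksums, and an fsync policy; this one uses an in-memory buffer in place of a file purely for illustration:

```python
# Append-only log with length-prefixed framing. Replay stops cleanly at
# the end of the log or at a torn (truncated) final record.

import io
import struct

def append(log, payload):
    log.write(struct.pack("<I", len(payload)))     # 4-byte length header
    log.write(payload)

def replay(log):
    log.seek(0)
    while True:
        header = log.read(4)
        if len(header) < 4:
            break                                  # clean end of log
        (size,) = struct.unpack("<I", header)
        payload = log.read(size)
        if len(payload) < size:
            break                                  # torn write: stop replay
        yield payload

log = io.BytesIO()                                 # stand-in for a file
append(log, b"order:1")
append(log, b"order:2")
print(list(replay(log)))                           # [b'order:1', b'order:2']
```

Sequential appends like this are what make millions of durable messages per second plausible on commodity hardware: the disk only ever sees large, contiguous writes.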

A blueprint for software is called a specification. TLA+ specifications have been described as exhaustively testable pseudo-code. High-level TLA+ specifications can catch design errors that are now found only by examining the rubble after a system has collapsed.