About

Scala Days Berlin

Scala Days, the premier Scala Conference, will be held this year at bcc in Berlin on June 15th through 17th, 2016, starting with two days of training on June 13th and 14th at Ramada Berlin Alexanderplatz. There will also be a Scala Days US in New York on May 9th through 13th, 2016.

The conference will bring together developers from all corners of the world to share their experiences and new ideas around creating applications with Scala and related technologies, like Akka and Play Framework. Scala Days provides a unique opportunity for Scala users to interact with the contributors to the language and related technologies and connect with fellow developers.

Last year’s conferences in San Francisco and Amsterdam were sold-out events! Leaders from Scala User Groups and communities around the globe, students and language contributors will gather to discuss academic research, use cases and visionary projects for a two-day, action-packed event.

Martin Odersky created the Scala programming language and is a professor in the programming research group at EPFL, the leading technical university in Switzerland. Throughout his career, Martin's singular objective has been to make the basic job of writing programs faster, easier and more enjoyable. In the process, he has personally written more lines of Java and Scala code than almost any other individual in the world. He wrote javac, the compiler used by the majority of today's Java programmers, and scalac, the compiler used by the fast-growing Scala community. He authored "Programming in Scala," the best-selling book on Scala. Previously he has held positions at IBM Research, Yale University, University of Karlsruhe and University of South Australia, after having obtained his doctorate from ETH Zürich as a student of Niklaus Wirth, the creator of Pascal.

After a fairly quiet 2015, things are heating up this year. To name
just three major developments among many: A major new release, Scala
2.12, is about to be completed. The Scala Center provides a new focal
point for community collaboration. And there's a new experimental
platform, dotty, which lets us prototype designs for the next
generation of the language.

In my talk I'd like to look a bit further ahead and focus on where I
see Scala in the next 5 years, touching topics like: What is Scala's
unique identity as a programming language? How should it evolve? What
exciting new technologies are on the horizon?

Heather is a research scientist and the executive director of the Scala Center at EPFL in Lausanne, Switzerland. She recently completed her PhD in EPFL’s School of Computer and Communication Science under Professor Martin Odersky, where she contributed to the now-widespread programming language, Scala. Heather’s research interests are at the intersection of data-centric distributed systems and programming languages, with a focus on transferring her research results into industrial use.

She now oversees the newly established Scala Center, whose goal is to spearhead community open-source development on Scala and to improve education surrounding Scala through a series of MOOCs.

Duncan DeVore is co-author of "Reactive Application Development", a Software Engineer at Lightbend, an open source developer and a frequent speaker. He has been an avid Scala developer since 2009, holds two patents for software design and led the release of one of the first large-scale Reactive applications in 2012.

After graduating with an M.Sc. in Computer Science from the Royal Institute of Technology in 1998, Henrik Engström worked as a consultant until joining Lightbend in 2011. Henrik not only has vast experience with various types of programming but also deep domain knowledge of the finance, retail and e-gaming industries. Apart from his major interest, programming languages, he is an avid Arsenal supporter, holds a black belt in Shotokan Karate and is a hobby wine connoisseur.

Over the last couple of years, Henrik has presented at various well-known conferences such as JavaOne, OSCON, JFokus, Scala eXchange and 33 Degrees.

Thursday (16th Jun.) 10:25

Reactive applications are the next major evolution of the Internet. They allow applications to be responsive, scalable and resilient by building on a fully event-driven foundation. Lightbend’s Reactive Platform, consisting of the Play Framework, the Akka middleware and the Scala programming language, embraces this new programming paradigm, which allows developers to write interactive applications that are always available and that adapt to changing load by being distributed by design.

While the reactive approach enables us to build highly scalable and resilient applications, it also introduces new challenges in how to monitor them. Almost every current monitoring tool relies on a stack-frame-based approach, where the stack trace can provide good answers to what caused an exceptional state. In message-driven, or asynchronous, applications this approach no longer yields useful information, so we need to invent new approaches for monitoring these types of applications. During this session we will cover the traditional monitoring approach, different possible ways to monitor asynchronous applications, and finally the way we have chosen to build a monitoring tool for reactive applications at Lightbend.

I am a principal engineer at the Huawei Research Center in Moscow, where I lead the Scalan project and the Parallel Computing Competence Center.

Thursday (16th Jun.) 10:25

In this talk I will report on our results from the Scalan project [1]. Even though the talk will be self-contained, it is also a continuation of the talk given last year in Amsterdam [2], where I focused on a high-level overview of Scalan and its goals. However, many topics were not covered or even mentioned.

The main topic of this talk is the unique meta-programming style and idioms that have been available in Scalan for more than a year, but are not yet widely known in the Scala community [3]. We believe they can be successfully used in many existing and new projects.

Meta-programming in Scala is a hot topic, and a lot of effort has been put into making it first-class by introducing and developing Scala Macros [4] and LMS [5], among others.

However, meta-programming is significantly more complex, error-prone and hard to debug than ordinary programming. Moreover, the meta-programming experience depends on the choice of intermediate representation and supported language, in particular which data structures are used and what manipulations are available out of the box.

This is clearly illustrated in Dotty [6], where similar motivations have led to the introduction of Denotations and a new encoding of higher-kinded types, to name a few.

Scalan's meta-programming style and idioms [3] are the result of the following design choices:
- use a graph-based IR for program representation and manipulation, even though it may seem harder to deal with
- focus on optimization of the functional subset of Scala as the first priority and motivation
- allow limited use of effectful operations, to retain full control over the performance of generated code
- use the standard Scala compiler and toolchain, in particular Scala Macros and the compiler-plugin infrastructure
- exploit Scala's powerful type-level computations and DSL-embedding flexibility to remain type-safe most of the time
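To make the first design choice concrete, here is a toy illustration (not Scalan's actual IR) of what a graph-based program representation looks like: nodes refer to their operands by id, so a shared subexpression is stored once rather than duplicated as in a tree.

```scala
// A toy graph-based IR (not Scalan's actual one): operands are node ids
// into the graph's node table, so shared subexpressions appear only once.
sealed trait Node
final case class Const(value: Int) extends Node
final case class Add(lhs: Int, rhs: Int) extends Node // operand node ids

final case class Graph(nodes: Vector[Node]) {
  def eval(id: Int): Int = nodes(id) match {
    case Const(v)  => v
    case Add(l, r) => eval(l) + eval(r)
  }
}

// x + x with x = 3: the Const node is shared rather than duplicated,
// which is what makes rewriting such graphs attractive for optimization.
val g = Graph(Vector(Const(3), Add(0, 0)))
val evaluated = g.eval(1) // 6
```

In a tree-based representation the rewriter would have to detect that the two operands are the same expression; in the graph they literally are the same node.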

My talk is aimed at a wide audience of Scala developers who love the functional programming style but still care about performance. They may find it useful to add meta-programming-based performance optimization and code-generation capabilities to their domain-specific libraries.

This is going to be a tutorial-like talk with REPL sessions and live examples of interesting non-trivial program transformations that can be easily composed in Scalan.

Daniel Spiewak is a software developer based out of Boulder, CO. Over the years, he has worked with Java, Scala, Ruby, C/C++, ML, Clojure and several experimental languages. He currently spends most of his free time researching parser theory and methodologies, particularly areas where the field intersects with functional language design, domain-specific languages and type theory.

Thursday (16th Jun.) 10:25

Shapeless is a remarkable framework. It gives us the power to represent astonishingly rich constraints and generalize code over very broad structural classes, but it isn't magic! The tools with which shapeless is crafted are present in your version of scalac just as much as they are in Miles Sabin's, and learning to take advantage of them unlocks a rich palette of expression otherwise untapped in the language. In this talk, we will recreate some of the major elements of shapeless, learning how to harness a fully armed and operational type system, all while avoiding any hint of macro programming! Particular focus will be given to understanding the general patterns and ideas involved, and not just the end result.
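As a taste of the techniques the talk recreates, here is a hedged sketch of a minimal HList with a type-level length computed by implicit induction. The names mirror shapeless conventions, but this is an illustrative reimplementation, not the library's actual code.

```scala
// A minimal HList in the spirit of shapeless; an illustrative
// reimplementation, not the library's actual code.
sealed trait HList
final case class HCons[+H, +T <: HList](head: H, tail: T) extends HList
sealed trait HNil extends HList
case object HNil extends HNil

// The length of an HList, computed at compile time by implicit induction:
// the compiler assembles the instance for any concrete HList type.
trait Length[L <: HList] { def value: Int }
object Length {
  implicit val nilLength: Length[HNil] =
    new Length[HNil] { def value = 0 }
  implicit def consLength[H, T <: HList](implicit t: Length[T]): Length[HCons[H, T]] =
    new Length[HCons[H, T]] { def value = t.value + 1 }
}

def length[L <: HList](l: L)(implicit len: Length[L]): Int = len.value

// Element types are tracked statically; the type annotation keeps HNil
// widened from its singleton type so implicit search can find the instances.
val record: HCons[Int, HCons[String, HNil]] = HCons(42, HCons("hello", HNil))
```

No macros anywhere: the "computation" of the length is ordinary implicit resolution, driven entirely by the types.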

Mirco is a software engineer at Lightbend. He is currently working on the Lagom framework, and has contributed to a few open source projects in the Scala ecosystem, such as the Play Framework and the Scala IDE for Eclipse. He loves chess and good wine, and is a big foosball aficionado.

Thursday (16th Jun.) 10:25

Microservice architectures are becoming a de facto industry standard, but are you satisfied with the current state of the art? We are not, as we believe that building microservices today is more challenging than it should be. Lagom is here to take on this challenge. First, Lagom is opinionated: it will take some of the hard decisions for you, guiding you to produce microservices that adhere to the Reactive tenets. Second, Lagom was built from the ground up around you, the developer, to push your productivity to the next level. If you are familiar with the Play Framework's development environment, imagine that, but tuned for building microservices; we are sure you are going to love it! Third, Lagom comes with batteries included for deploying to production: going from development to production could not be easier.

In this session you will get an introduction to the Lightbend Lagom framework. There will be code and live demos to show you in practice how it works and what you can do with it, making you fully equipped to build your next microservices with Lightbend Lagom.

Andrea Peruffo is an all-around software developer with hands-on experience in delivering all kinds of software systems, from large-scale cloud systems to embedded devices. He works at Unicredit R&D, pushing the limits of technology to build a better world for tomorrow’s developers.

Thursday (16th Jun.) 11:35

I will present the ongoing effort to port and cross-compile (most of) Akka, one of the biggest frameworks in Scala land, to Scala.js, the alternative Scala compiler that targets JavaScript.

We will dig into various use cases, from taming UI complexity to making concurrent server-side code easy on Node.js, and more!

Properly handling distributed and concurrent runtimes with the very same abstraction layer will turn your world upside down, breaking the frontier between client and server development.

The machine learning libraries in Apache Spark are an impressive piece of software engineering, and are maturing rapidly. What advantages does Spark.ml offer over the older technologies that inspired its design?

At Data Science Retreat we've taken a real-world dataset and worked through the stages of building a predictive model -- exploration, data cleaning, feature engineering, and model fitting -- in several different frameworks. We'll show what it's like to work with Spark.ml, and compare it to other widely used frameworks (in R and Python) along several dimensions: ease of use, productivity, feature set, and performance.

In some ways Spark.ml is still rather immature, but it also conveys new superpowers to those who know how to use it. We hope to inspire you to join us in using and improving it.

Dmitry has been working on Scala since 2013, when he joined Martin Odersky's research lab at EPFL, working on ScalaBlitz, macro-generated collections for Scala. Since 2015, he has been working on the Dotty compiler. He designed Mini-Phases, ported the JVM backend, and implemented support for Java 8 lambdas and default methods along with various other parts of the compiler, including the pattern matcher, lazy vals, tail recursion transformations and parts of erasure and mixin composition. He is currently working on the Dotty Linker, an optimizing compiler based on Dotty.

Thursday (16th Jun.) 11:35

Common arguments for using more elaborate type systems include safety and documentation. But we will show how more expressive type systems can be used to drive novel powerful optimizations and make the program faster. Based on this principle, we built the Dotty Linker, a whole-program optimizer that represents a breakthrough in optimization of Scala code. We will demonstrate how the linker is capable of reducing the performance overhead of commonly-used Scala features such as generic methods and classes, lazy vals, implicit conversions and closures.

Build a Recommender System in Apache Spark and Integrate It Using Akka

Willem is a technical evangelist for Info Support where he helps projects get the most out of new innovations like Machine Learning and IoT. When not helping projects get going with new technology he works on knowNow, a social knowledge management platform.

Thursday (16th Jun.) 11:35

To some, machine learning still seems very magical. The truth, however, is that this magic is much easier to use than you'd expect. Come and learn how you can use Apache Spark and Akka together to build a service that recommends items to users. In this session, I'm going to show you some of the bits that go into building a recommender system, how to actually implement one in Spark and finally how to integrate the recommender system into your application using Akka HTTP.

Jon has been involved in the Scala community for over a decade, having launched the first commercial and open-source Scala software back in 2005. Since then, he has successfully deployed Scala projects into small, medium and large businesses, and UK government, but is best known these days for his work on Rapture and as the organizer of the annual Scala World conference.

Jon has spoken on a variety of topics at dozens of Scala conferences and user groups around the world over the last five years.

Thursday (16th Jun.) 13:20

Scala combines a comprehensive array of syntactic features with a rich static type system. At the intersection between Scala's syntactic versatility and its reliable type-level constraints, there exists a narrow window of opportunity for designing APIs that are both expressive and safe; for writing code that is elegant.

We will explore this unique "elegance zone" offered by Scala, with useful real-world examples from Rapture, a collection of libraries for familiar everyday programming tasks, such as working with JSON, XML, HTML and CSV, time, internationalization, logging and I/O. Rapture's philosophy is to drive forward the state of the art in type safety, whilst at the same time maintaining the most intuitive syntax.

Ultimately, we will show that features of Scala like implicits and type inference offer some very exciting possibilities for developing software with both clearer code and more static guarantees, and that writing elegant code in Scala is within everyone's grasp: this is Scala's great chance to outshine other languages!

Luc has been working on the JVM since 2002, first for IBM on the Eclipse project in the debugger team, where he wrote the expression evaluation engine. After a few other Eclipse projects, he went to TomTom to recreate their data distribution platform for over-the-air services. He joined Lightbend in 2011 to work on the Eclipse plugin for Scala, and then switched to the Spark team, with a focus on deployment and interaction with other frameworks.

Thursday (16th Jun.) 13:20

A reactive application doesn't live in isolation: it has to connect with other components while trying to keep the whole system reactive.

Spark, as an element of a Fast Data architecture, is one of these components to connect to. With the addition of back pressure support to Spark Streaming in Spark 1.5, and the other characteristics of Spark, it is simpler than before to fully integrate it into a reactive system. This talk will describe the streaming model in Spark and its support for back pressure, and show, in a demo, how to use Reactive Streams to integrate Spark Streaming and a reactive application in a reactive system.

Katrin has been a Scala functional junkie since 2012 and uses Scala and the Typesafe stack daily at BoldRadius. She spends too much of her free time on unsolicited coding and is a proud Torontonian Scala fighter and co-organizer of the Toronto Scala meetup.

Thursday (16th Jun.) 13:20

An experience-based 'taking it to production' talk about Scala.js, specifically tailored for Scala folks with chronic JavaScript pain. People with and without Scala.js knowledge are equally invited. What to expect? A quick yet concise overview of the moving parts of Scala.js, followed by a presentation of a non-coast-to-coast approach to Scala.js. Still not convinced to come to the talk? How about:

Spoiler #0: There will be no preaching to the choir. Huh? No slides about JavaScript vs Scala. We all know the answer, don't we?

Spoiler #1: The overview will be based on the perks of a Scala.js cross project. Cross-building knowledge is power, as you will see.

Spoiler #2: There is a strong presumption that the number of your frontend teammates willing to move to Scala.js is zero. Tell me if I'm wrong, and I'll show you how "my way or the highway" is not your only Scala.js option.

Spoiler #3: It will all start making sense. See you on the scala-js gitter.

Mark is an experienced full-stack DevOps engineer, currently working for Info Support. After finishing his master's in information security, writing an award-winning thesis on the Android permission model, he decided to focus on software development. For Mark, the ideal development pipeline is equipped with continuous integration and delivery, extended test harnesses, and new languages that need less boilerplate and allow a more concise syntax to focus on the application logic instead of the machine or framework logic. Mark's current learning interests are Scala, blockchain and distributed ledgers.

Johan is working as a Java architect and competence center Java lead at Info Support. He has worked for various demanding companies where rapidly delivering quality software was very important. Currently, he is working as a Java architect in a DevOps team in a big financial institution in The Netherlands. He likes sharing his knowledge about Java, continuous delivery, DevOps, software quality, and numerous other subjects. Johan regularly writes articles and gives presentations about those subjects, for instance at JavaOne, Devoxx, J-Fall, JavaLand, JBCNConf, Java Forum Nord, Coding Serbia, JavaCro and ConFESS.

Thursday (16th Jun.) 13:20

Normally, we use Java or Scala to build applications for large organizations running on servers. We wanted to find out if we could use the same languages and tools on IoT hardware. We also wanted to investigate whether (remote) actors could replace REST endpoints. The Lego trains are equipped with a Raspberry Pi, camera, wireless dongle, infrared transmitter, speaker, RFID reader and battery pack. Next to that, we have automated switch tracks, a Lego Ferris wheel and cameras, again with the help of Raspberry Pis. We also built some lighting effects with LEDs controlled by Particle Photons. To control the trains and the other parts, we built a remote-actor-based application with Scala, Akka, Akka HTTP and AngularJS. We will show you when and how to use Akka HTTP and remote actors, and we will show the results of the performance tests we did to compare the two options. Next to that, we will talk about our experiences and challenges, and of course we will give a live demo!

Heiko Seeberger is Fellow at codecentric and an internationally renowned expert on Scala and Akka. He has more than 20 years of experience in consulting and software development. Heiko tweets under @hseeberger and blogs under heikoseeberger.de.

Thursday (16th Jun.) 14:30

Akka is a toolkit for building elastic and resilient distributed systems, and Docker makes shipping and running those distributed systems easier than ever before. In this talk we will briefly introduce you to the basics of Akka and Docker and then show how you can use these innovative technologies to build and run really Reactive microservices. Don’t expect too many slides, but be prepared for live demos.

I am one of the co-founders of SoftwareMill, where I code mainly using Scala and other interesting technologies. I am involved in open-source projects, such as Macwire, Supler, ElasticMQ and others. I have been a speaker at major conferences, such as JavaOne, Devoxx and ScalaDays.

Apart from writing closed- and open-source software, in my free time I try to read the Internet on various (functional) programming-related subjects; any ideas or insights usually end up on my blog: http://www.warski.org/blog

Thursday (16th Jun.) 14:30

Event sourcing is a great alternative to traditional "CRUD"-type architectures. The central concept is a persistent stream of events, which drives all changes to the read model and any business logic. There are many technical and business benefits to such an approach, such as being able to re-create the state of the system at any point in time, or keeping a detailed *audit log* of all actions. Typically, implementations of event sourcing are presented using NoSQL data storage, which is great for many use cases (e.g. using Akka Persistence + Cassandra). However, nothing stops us from using a relational database and SQL! In many applications (especially "enterprise" ones), this brings benefits such as powerful, familiar query capabilities and stronger guarantees around data consistency.

In this mainly live-coding talk we’ll see one way of implementing transactional event sourcing using the 'slick-eventsourcing' micro-framework, introducing the core concepts: command handlers, read model updates and event listeners, and how to use them to build an event-sourced application. We’ll see how Slick’s 'DBAction' and Scala's flexibility make it possible to provide an elegant DSL for building the system from simple functions with minimal dependencies.
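To make the core concepts concrete, here is a minimal event-sourcing sketch in plain Scala, without Slick or the 'slick-eventsourcing' API; the domain types (Account, Deposited, Withdrawn) are illustrative assumptions. The state is simply a left fold over the event stream, and the command handler validates against that state before emitting events.

```scala
// Illustrative domain: a bank account. These names are assumptions for the
// sketch, not types from slick-eventsourcing.
sealed trait Event
final case class Deposited(amount: BigDecimal) extends Event
final case class Withdrawn(amount: BigDecimal) extends Event

final case class Account(balance: BigDecimal)

// Command handler: validate the command against the current state and
// emit events, or reject it.
def withdraw(state: Account, amount: BigDecimal): Either[String, List[Event]] =
  if (amount <= state.balance) Right(List(Withdrawn(amount)))
  else Left("insufficient funds")

// Read model update: applying a single event to the state.
def applyEvent(state: Account, e: Event): Account = e match {
  case Deposited(a) => Account(state.balance + a)
  case Withdrawn(a) => Account(state.balance - a)
}

// The current state is re-created by folding over the persistent event
// stream, which is also what enables point-in-time reconstruction.
def replay(events: Seq[Event]): Account =
  events.foldLeft(Account(0))(applyEvent)

val current = replay(List(Deposited(100), Withdrawn(30))) // Account(70)
```

The transactional variant in the talk wraps the same kinds of simple functions in database actions, so that appending events and updating the read model commit atomically.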

For over thirty years, Michael has designed, developed, shipped and consulted on software development projects for clients of all shapes and sizes. An advocate for software craftsmanship with expertise in project management and architecture, Michael was among the earliest adopters of the Typesafe Stack, with over 5 years of experience working with Scala, Akka and Spray.

Thursday (16th Jun.) 14:30

One of the great benefits of the Lightbend ecosystem is its ability to enable great scalability, and it is often considered for this very reason. Indeed, if employed correctly, Scala, Akka and the rest can make it much easier to write systems that scale up (and out) to extremes. However, not every design is able to scale, and the approach and architecture can sharply limit this option, no matter how good the tools. What does "perfect scalability" look like? Is there even such a thing? Can systems be designed to scale virtually without limit? Where does the pattern break down, and what can you do about it?

In this session, we will examine the practical architectural constraints on systems intended to scale up in near-linear fashion. It turns out that having the ability to scale to extreme load is more about what not to do than it is about what to do. Why do some systems scale up while others don't? How can you take a design that has problems and refactor to a design that has fewer limits? Our teams encounter these problems every day, and we keep track of what works and what doesn't in the real world. What have we seen in the field? What proven real-world design and architecture approaches can we use to help ensure our systems can scale, and what should we avoid? How can we leverage the power of the Lightbend platform to architect such systems? Which parts for which kinds of problems? How do Akka and the actor model fit these patterns? How do Spark, Kafka and Akka streaming fit in?

Highly scalable systems often combine tools such as Docker, Ansible, Salt, Mesos and ConductR with techniques such as microservices, monitoring and continuous delivery. Automation of the deployment pipeline and deep performance monitoring are essential parts of highly scalable systems; we will look at examples of such systems, which of the tools in the stack they employ, and why. In this talk, we'll examine both the practices and the pitfalls, and the combinations we have used to scale not only within single clusters, but across data centers for continent-spanning applications where required. We will use use cases ranging from massive-scale IoT wearable devices to high finance, and find the commonalities in the solutions that don't limit their own expansion. If your organization needs to take scalability to significantly higher levels, this talk is where you need to start.

Manuel Bernhardt is an independent software consultant with a passion for building web-based systems, both back-end and front-end. He is the author of "Reactive Web Applications" (Manning), and he started working with Scala, Akka and the Play Framework in 2010 after spending a long time with Java. He lives in Vienna where he is co-organiser of the local Scala User Group. He is enthusiastic about the Scala-based technologies and the vibrant community and is looking for ways to spread its usage in the industry. He has also been scuba-diving since age 6 and can’t quite get used to the lack of sea in Austria.

Thursday (16th Jun.) 14:30

In this talk/live-coding session, we will have a practical look at how a reactive web application is different from a "normal" one and how resilience, elasticity and responsiveness translate into code. We will start by having a quick theoretical introduction to asynchronous computation and then build, run, deploy and load-test a small reactive web application built with the Play Framework, exploring a few key concepts such as Futures, Actors and Circuit Breakers along the way.
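The asynchronous building block the session starts from is available in the standard library. A minimal sketch of composing `scala.concurrent.Future` values without blocking (the `Await` at the end exists only to observe the result in this demo):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Two independent "service calls" that run concurrently.
val price: Future[Int] = Future { 100 }
val tax: Future[Int]   = Future { 20 }

// Composition stays non-blocking: the for-comprehension describes what
// to do once both results are available.
val total: Future[Int] = for { p <- price; t <- tax } yield p + t

// Blocking here only to demonstrate the final value.
val result = Await.result(total, 5.seconds) // 120
```

In a Play controller the result would instead be mapped into an asynchronous response, keeping the request-handling thread free, which is exactly the responsiveness-under-load behaviour the talk load-tests.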

Carlos is the Scala Performance Lead Engineer in the Intel Big Data Performance Group. His previous speaking experience includes Spark Summit 2015 (substituting for Eric Kaczmarek). He has extensive experience with Scala, Spark, Hadoop, and JVM performance optimization and characterization. Previously he was part of the Intel Atom Architecture Group, and he holds a Master's degree in Computer Engineering from Columbia University.

Thursday (16th Jun.) 15:40

Scala is the fastest-expanding language in the datacenter, big data and cloud computing. However, the lack of standardized big data Scala benchmarks has hindered extended performance improvements related to the datacenter. In this talk we will present a newly developed Scala big data benchmark based on learnings from TPCx-BB, SparkBench and genomic (GATK/ADAM) workloads. We'll also introduce the most impactful big-data-related performance improvements for Scala developed by our group.

Tim Soethout is a functional programming purist in any language. During his studies, he worked a lot in Haskell using the concepts of functional programming. In the enterprise world, Scala is his next interest, since it is an ideal mix of mainstream programming and functional programming. As a true developer, he does not want to do things twice; automation is key. Lately, he has been focusing on making Scala accessible to everyone inside the enterprise by making sure all road bumps, such as training, tooling and support, are taken care of.

Thursday (16th Jun.) 15:40

At the Scala eXchange keynote, Jessica Kerr mentioned that there is plenty of documentation for beginning and expert Scala developers, but (almost) nothing in between. In this talk I want to demystify implicits. Implicits are a fairly advanced feature and a very important aspect of the Scala language. They help with writing concise code and enable lots of DSLs. On the other hand, they can seem very magical to the untrained eye. In this talk, we will delve into idiomatic use cases of implicits and how the Scala compiler resolves them. If time allows, we will jump into type classes as well.
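As a small taste of what implicit resolution does, here is a hedged sketch of an implicit parameter being supplied from a companion object (the "implicit scope"); the `Show` typeclass here is illustrative, not from any particular library:

```scala
// A small typeclass; illustrative, not from any particular library.
trait Show[A] { def show(a: A): String }

object Show {
  // Lives in the companion object, so it is part of the "implicit scope"
  // of Show[Int]: no import is needed at the call site.
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = s"Int($a)" }
}

// The compiler fills in the implicit parameter by searching the local
// scope and then the implicit scope of the types involved.
def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

val described = describe(42) // the compiler supplies Show.intShow
```

Calling `describe("hi")` would not compile, because no `Show[String]` instance is in scope; that compile-time feedback is a large part of why implicits feel magical until the resolution rules are spelled out.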

Stefan Zeiger is the tech lead for Slick. He joined Lightbend in 2011 after developing ScalaQuery, the predecessor to Slick, in order to work on the new project full-time. He has been a user of Java and the JVM platform professionally since 1996, working on a diverse range of projects from web servers to GUI frameworks and programming language design, and moving on from Java to Scala since 2008. He is a frequent speaker at Scala Days and other conferences.

Thursday (16th Jun.) 15:40

This talk gives an overview of the "lifted embedding" at the core of the Scala DSL in Slick, Lightbend's relational database library. With standard Scala language features we can provide a DSL that allows you to work with database tables using a syntax similar to Scala collections. Of particular interest are abstractions for record types, such as tuples. While earlier versions of Slick had to get by with custom types for all tuple sizes to represent flat tuples of individual columns, the "Shape" abstraction paved the way for using standard Scala tuples in the DSL, with arbitrary nesting. The same abstraction is used to support Slick's own HList implementation, and with only a few lines of code, your own custom record types or other HList implementations. The core language feature behind this design is called "functional dependencies". It was added to Scala in release 2.8 to be able to implement the "CanBuildFrom" abstraction in the new collections framework.
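The "functional dependencies" idea can be sketched independently of Slick: an implicit whose type member is determined by its input type, exposed through the common "Aux" pattern. The `Second` typeclass below is an illustrative assumption, not Slick's actual `Shape`:

```scala
// An implicit whose output type is a function of its input type: the
// essence of "functional dependencies". Second is an illustrative
// assumption, not Slick's actual Shape abstraction.
trait Second[T] { type Out; def apply(t: T): Out }

object Second {
  // The "Aux" alias exposes the type member as a parameter, so call
  // sites can have the compiler infer it.
  type Aux[T, O] = Second[T] { type Out = O }

  implicit def pair[A, B]: Aux[(A, B), B] = new Second[(A, B)] {
    type Out = B
    def apply(t: (A, B)): B = t._2
  }
}

def second[T, O](t: T)(implicit s: Second.Aux[T, O]): O = s(t)

// The result type is computed by the compiler from the tuple type:
// here it is statically known to be String.
val s2 = second((1, "two"))
```

Slick's `Shape` uses the same mechanism at larger scale, mapping a "mixed" column representation to its "unpacked" Scala type, which is how arbitrary nestings of tuples and HLists all work through one abstraction.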

Noel started out as a Java developer in finance before moving to functional programming in startups for both games and social media. More recently, he has worked in broadcast media, and is currently working in London at 47 Degrees.

Thursday (16th Jun.) 15:40

Typeclasses are a hidden gem of the Scala language. They provide immense power not seen in imperative languages, and so their approach might seem unusual or alien to those approaching Scala from an imperative background. I will show how typeclasses allow developers to effectively attach their own interfaces to code written by others. In this talk, I describe what a genetic algorithm is and provide an example implementation in Scala. Using this implementation, I will demonstrate how to define a specific typeclass for our problem. I will then derive several different implementations, showing how to get rock-solid confidence in testing our algorithm - with the help of ScalaCheck - and then provide a completely different typeclass to provide a fun, visual and creative solution, illustrating the iterations and improvements as the genetic algorithm’s fitness function runs. The talk will be particularly hands-on, with plenty of examples run directly from the REPL.
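A hedged sketch of the idea: a hypothetical `Fitness` typeclass attaches a scoring interface to a plain data type, and the selection code works for any type that has an instance. The toy "OneMax" problem stands in for a real fitness function:

```scala
// A hypothetical Fitness typeclass: it attaches a scoring interface to
// an existing type without modifying it.
trait Fitness[A] { def score(a: A): Double }

// Candidate solutions are plain bit vectors; the toy "OneMax" problem
// scores a candidate by its number of ones.
implicit val oneMax: Fitness[Vector[Int]] =
  new Fitness[Vector[Int]] { def score(bits: Vector[Int]) = bits.sum.toDouble }

// Selection is written once, for any type that has a Fitness instance.
def fittest[A](population: Seq[A])(implicit f: Fitness[A]): A =
  population.maxBy(f.score)

val best = fittest(Seq(Vector(0, 1, 0), Vector(1, 1, 0), Vector(1, 1, 1)))
```

Swapping in a different `Fitness` instance changes the problem being solved without touching the algorithm, which is exactly the substitution the talk uses for testing and for the visual variant.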

You may have seen the @inline and @specialized annotations used in Scala code, and have some idea that they are added for performance reasons. But the exact details of what they do are not widely known, and it's hard to estimate whether they will provide a real performance benefit to your code. This talk will: explain exactly what the annotations do; provide some examples of how to use them; and use benchmarks to explore how they affect performance. There will also be honourable mentions for some of the more esoteric Scala annotations, such as @elidable, @strictfp, @switch and @varargs.
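A minimal example of both annotations in use; assuming a simple container class, @specialized(Int) asks the compiler to also generate an unboxed Int variant, and @inline marks the method as a candidate for inlining at call sites:

```scala
// @specialized(Int) asks the compiler to generate a variant of Box where
// A is a primitive Int, avoiding boxing; @inline marks map as a
// candidate for inlining at call sites.
class Box[@specialized(Int) A](val value: A) {
  @inline final def map[B](f: A => B): Box[B] = new Box(f(value))
}

val doubled = new Box(21).map(_ * 2).value // 42
```

Whether either annotation actually pays off depends on the call sites and compiler flags involved, which is precisely what the benchmarks in the talk measure.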

Aleksandar is a computer science researcher with 7 years of academic and industrial experience. He has published 12 peer-reviewed research publications, led multiple software projects, supervised EPFL-hosted open source projects with external funding from Google, and organised a highly successful Coursera massive open online course on reactive programming. He has participated in several international collaborations with top universities and industrial partners. Moreover, he has participated in 17 industrial and research conferences and meetups, held 6 invited talks, and authored a textbook on concurrent programming in Scala. Finally, since 2004, he has been employed as a software engineer at Google.

Thursday (16th Jun.) 16:50

The actor model is one of the de-facto standards when it comes to building reliable distributed systems. There are many reasons why actors are so attractive. On one hand, actors ensure that message-processing is serialized within each actor, preserving the familiar sequential programming model. On the other hand, programs written in the actor model are location-transparent, which is one of the prerequisites for scaling systems. Importantly, the actor model is sufficiently low-level to express arbitrary message protocols. However, composing these message protocols is the key to high-level abstractions, and it is difficult to reuse or compose message protocols with actors. Lack of simple composition is an obstacle to building complex systems.

The reactor model is the new answer to the challenges of distributed computing. This model simplifies protocol composition with first-class typed channels and event streams. In this talk, I will present the Reactors.IO framework which is based on the reactor programming model. I will compare the reactor and the actor models on concrete Scala programs. I will show specific obstacles for composition in the classic actor model, and how to overcome them. I will then show how to build reusable, composable distributed computing components in the new model.

Philipp Haller is an assistant professor in the theoretical computer science group at KTH Royal Institute of Technology, the leading technical university in Sweden. His main research interests are programming languages, type systems, and concurrent and distributed programming. Philipp is co-author of Scala's async/await extension for asynchronous computations, and one of the lead designers of Scala's futures and promises library. As main author of the book "Actors in Scala," he created Scala's first widely-used actors library. Philipp was co-chair of the 2013 and 2014 editions of the Scala Workshop, and co-chair of the 2015 ACM SIGPLAN Scala Symposium. Previously, he has held positions at Typesafe, Stanford University, and EPFL. He received a PhD in computer science from EPFL in 2010.

Thursday (16th Jun.) 16:50

Futures and promises in Scala are an integral part of asynchronous and concurrent code. Popular websites rely on futures for responsiveness in the presence of a large number of concurrent visitors. Furthermore, futures work well together with functional programming abstractions, thanks to the single-assignment dataflow nature of future-based computations. However, futures also have important restrictions. For example, a future is completed with at most one result. Therefore, it is impossible to complete a future with a preliminary result and subsequently refine the result, for example, when more precise information becomes available. Finally, futures do not support resolving cyclic dependencies, instead resulting in deadlocks when such dependencies occur.
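The single-completion restriction is easy to observe with the standard library's `Promise`:

```scala
import scala.concurrent.Promise
import scala.util.Success

// A future is single-assignment: the first completion wins, and any later
// attempt is rejected, so a preliminary result can never be refined.
val p = Promise[Int]()
p.success(1)
val refined = p.trySuccess(2)   // false: the promise is already completed
```

After this runs, `p.future.value` still holds the original result `Success(1)`.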

In this talk, I report on an extension of Scala's futures with lattices and quiescence, inspired by related features in Haskell's LVish package and the LVars programming model. In this extended model the state of a future is an element of a lattice. Importantly, multiple updates of its corresponding "promise" are possible where updates correspond to join operations of the lattice. In addition, resolution of cyclic dependencies is supported through a mechanism based on detecting quiescence of the underlying execution context. The programming model is currently being exploited in the context of OPAL, a new Scala-based, concurrent static analysis framework developed at Technische Universität Darmstadt. In the last part of my talk I will report on experimental results (performance, code complexity) using lattice-based futures for several large-scale static analysis tasks (e.g., purity analysis, bug finding) with OPAL.
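To make the lattice idea concrete, here is a loose single-threaded-style sketch (illustrative only, not the OPAL or LVish API): the cell's state is an element of a powerset lattice, and every update is a join (set union), so later updates refine rather than replace earlier results.

```scala
// A cell whose state grows monotonically along a lattice: each write is a
// join with the current state, so multiple "completions" are legal.
final class LatticeCell[A] {
  private var state: Set[A] = Set.empty
  def join(delta: Set[A]): Unit = synchronized { state = state union delta }
  def read: Set[A] = synchronized(state)
}

val cell = new LatticeCell[Int]
cell.join(Set(1))
cell.join(Set(2, 3))   // a second update refines the earlier result
```

Because joins are commutative and idempotent, the order in which concurrent updates arrive does not affect the final state.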

Don't Fear the Implicits: Everything You Need to Know About Typeclasses

Daniel Westheide is a senior consultant at innoQ Germany and has been working with Scala and other functional programming languages since 2011. He is also the author of "The Neophyte's Guide to Scala" and has been mentoring junior Scala developers in various projects.

Thursday (16th Jun.) 16:50

Developers who are new to Scala often shy away from coming into contact with implicits, and by extension, understanding typeclasses. In big organizations that have been adopting Scala at scale, you sometimes even come across hard rules that put a ban on the use of implicits because that language feature is considered to be too advanced and not understood by a lot of developers. On the other hand, implicits and typeclasses are used heavily not only by a lot of the most important Scala frameworks and libraries, but also in the standard library. Given the fact that it is so hard to evade them when writing real world Scala code, I would like to encourage developers adopting Scala to overcome their fear of implicits and instead embrace the typeclass pattern. In this talk, aimed at intermediate Scala developers, you will learn everything you really need to know about typeclasses: What they are good for and how they compare to what you are familiar with from object-oriented languages, when you should and should not use them, how the pattern can be encoded in Scala and how to write your own typeclasses, how to provide instances of typeclasses for your own or existing types, and how to do all of this with minimal boilerplate. Throughout the talk, you will see numerous examples of typeclasses used in the Scala ecosystem and the standard library, and you'll see that you don't need to know anything about category theory to benefit from embracing typeclasses.
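As a minimal sketch of the encoding (the `Show` typeclass here is a standard teaching example, not the talk's own code): a trait for the interface, implicit values for the instances, and an implicit class providing call-site syntax.

```scala
// The typeclass: an interface we can attach to types we don't own.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // An instance for a type from the standard library (Int).
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }

  // Syntax: any A with a Show instance in scope gets a .show method.
  implicit class ShowOps[A](a: A) {
    def show(implicit ev: Show[A]): String = ev.show(a)
  }
}

import Show._
```

After the import, `42.show` works even though `Int` was defined long before `Show` existed, which is the essence of retroactively attaching an interface.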

Andy is a mathematician turned distributed computing entrepreneur.

Besides being a Scala/Spark trainer, Andy has also participated in many projects built using Spark, Cassandra, and other distributed technologies, in various fields including geospatial, IoT, automotive and smart-city projects.

Andy is also a member of the program committees of the O’Reilly Strata, Scala eXchange, Data Science eXchange and Devoxx events.

Thursday (16th Jun.) 18:00

Until fairly recently, the languages of choice for Data Scientists to manipulate and make sense of data were mainly Python, R or Matlab. This led to a split in the communities and to duplicated effort across languages offering similar functionality. Although some foresaw that Julia (for instance) could unite parts of these communities, something unexpected happened instead: an explosion in the amount of available data and in the distributed technologies to handle it. These distributed technologies arose from the data engineering world, and most of them run on a convenient, easy-to-deploy platform: the JVM. In this talk, we’ll show how Data Scientists are now part of a heterogeneous team that has to face many problems and work towards a global solution together. This includes a new responsibility to be productive and agile so that their work can be integrated into the platform. This is why a technology like Apache Spark is so important nowadays and is gaining traction from different communities. And even though bindings are available for the legacy languages, much of the creativity in new ways to analyse data happens in Scala. The second part of this talk will therefore introduce and summarize the new methodologies and scientific advances in machine learning that use Scala as the main language. We’ll demonstrate all of this using the right tooling for Data Scientists, enabling interactivity, live reactivity, charting capabilities and robustness in Scala, something that was still missing from the legacy languages. The examples will be shown in a fully productive and reproducible environment combining the Spark Notebook and Docker.

Martin heads up Cake Solutions' technical team in the US and is a contributor to Apache Spark and to the Cassandra plugin for Akka Persistence. Martin focuses on distributed systems, parallel and distributed approaches to data processing, machine learning, data mining in large volumes of data, and big data in general. These fields are increasingly important in industry, and Martin has been promoting Scala, functional programming, and Reactive approaches because they provide very useful tools for solving these problems.

Thursday (16th Jun.) 18:00

Processing streaming data is becoming increasingly important in many areas. Scala and the Lightbend Reactive platform offer multiple solutions for processing streaming data, including Akka, Akka Streams and Apache Spark. This talk introduces the advantages and concepts of streaming data processing, and notes the differences between static data and data in motion and their use as streaming data sources. The main goal of the presentation is a detailed discussion of Akka Persistence Query and the implementation of the stream production specification in the Cassandra plugin for Akka Persistence (akka-persistence-cassandra), which the author contributed to. The focus is on architecture and design considerations, implementation details, performance tuning and distributed-system specifics such as correctness, efficiency, consistency, order, causality and failure-scenario handling, which are inherently part of the solution and apply to a wide variety of distributed systems. Finally, other improvements to the Cassandra plugin for Akka Persistence, such as reusing the stream generation for non-blocking asynchronous Akka Persistence recovery, are presented, along with applications of the project and the discussed concepts to building modern reactive enterprise stream-processing and asynchronous-messaging distributed applications.

Vaughn Vernon is a veteran software craftsman and thought leader in simplifying software design and implementation. He has been programming since 1983, consults and speaks internationally, and has taught his Implementing Domain-Driven Design classes to hundreds of developers around the globe. Vaughn is the author of the 2015 best-seller "Reactive Messaging Patterns with the Actor Model: Applications and Integration in Scala and Akka" and the best-seller "Implementing Domain-Driven Design," both published by Addison-Wesley.

Thursday (16th Jun.) 18:00

What are microservices all about, and are they practical for your enterprise? How granular should a microservice be, and what approach should you use to determine the proper and appropriate boundaries between microservices? How can each microservice communicate with others in a distributed computing environment to correctly fulfill business objectives? How can your microservices adhere to the tenets of reactive software, being responsive, resilient, elastic, and message-driven? Using Scala and Akka to implement microservices, this talk will demonstrate how you can implement microservices while answering all of the questions posed, and more. The talk will show you how to carefully craft microservices and to model the business domain within. You will experience advanced use of Akka throughout.

Alexander graduated from the department of mathematics at Saint Petersburg State University in 2010 and has won numerous prizes in international and regional mathematical competitions. In 2008, Alexander started working for JetBrains, where he became team leader for the Scala plugin for IntelliJ IDEA. Since 2012 he has also taught Scala at the Saint Petersburg Academic University.

Thursday (16th Jun.) 18:00

Why do developers love IntelliJ IDEA? Because it takes care of all the routine and provides intelligent coding assistance. Once you master it, the productivity gains are quite surprising. In this session I'll show you the 30 hidden gems of IntelliJ IDEA that will help you become a more productive Scala developer.

Jamie specializes in innovation and technical strategy. As the CEO of Container Solutions, he is primarily responsible for the direction of the company and the well being of the team. He occasionally works with clients on tricky strategic problems, teaching executives how to "win" with technology.

Friday (17th Jun.) 09:00

This talk is about how software, particularly open-source software, is not only eating the world but eating capitalism itself - from the inside out.

Last summer I read Paul Mason's Postcapitalism and thereafter Rifkin's The Zero Marginal Cost Society. As I did this, and made notes, I came to see that our own open sourced products, such as Mini-Mesos and our ElasticSearch Framework, were threatening to disrupt our commercial competitors.

After thinking more about this, it became obvious that open source software, and for example 3-D printing, is not only disrupting how we build things but is also undermining the core relationships of capitalism. If the marginal cost of a unit is zero and so is its price, then there can be no profit. This one relationship undermines the very foundation of capitalism, something that Marx predicted.

This talk will look at capitalism, post capitalism, and as a case study will look at our ElasticSearch framework for Mesos. I will give insights into how companies can still stay relevant even when software is free - and I will do this by looking at how windmills utterly disrupted landowners in 11th century Yorkshire.

Miles has been doing stuff with Scala since 2004, currently with Underscore Consulting. His best known project, the Scala generic programming library shapeless, is the weapon of choice wherever boilerplate needs to be scrapped or arities abstracted over.

Friday (17th Jun.) 10:25

There has been a huge amount of activity around the Typelevel family of projects in the last eighteen months. It hasn't always been plain sailing, but the arrival of Cats on the scene last year marked the beginning of an exciting period of collaboration among the Typelevel projects, and of reaching out to the wider Scala community in a way that hadn't been possible before. Now, in mid-2016, we have had two Typelevel conferences and things are going from strength to strength. This talk will give a flavour of what has been going on: the collaborations between Algebra, Spire and Cats; between Cats and shapeless; between shapeless and scodec, doobie, ScalaCheck and Circe; and how all of this is feeding into the rebooted Typelevel Scala fork. It's also an open invitation to people right across the Scala spectrum to get involved in these projects and see what they can do for them in their own work.

Sidney has over 10 years' experience in developing and architecting real-time and mission-critical software systems across many industries, ranging from financial services to manufacturing. He likes challenging traditional constraints and applying the latest R&D and technologies in elegant yet reliable solutions to real-world problems. At Atlassian, he is an Architect within Engineering Services, a group tasked with breaking apart monoliths in Atlassian Cloud into massively scalable and highly available microservices; immutable data, type-safe code, and idempotent operations are his battle cries in this mammoth war!

Friday (17th Jun.) 10:25

At Atlassian, we are in the midst of an architectural shift towards microservices to provide a scalable and flexible platform for our cloud services. In order to achieve this safely with no risk of losing customer data, we are investigating ‘event sourcing’ for representing our domain models i.e. capturing streams of immutable events to represent data instead of the traditional update-in-place paradigm. With event sourcing, old versions of data can be easily restored and audit trails are available by default to help with debugging. In combination with the command-query responsibility separation (CQRS) pattern, we can seamlessly bring online new functionality requiring schema changes or new query patterns, and re-architect later for more scale simply by replaying and reinterpreting events into new microservices and ephemeral data stores.

In this talk, we will describe in detail what event sourcing and CQRS are, why we are using this approach, and a walk through of our implementation in Scala (leveraging scalaz-streams) in an AWS environment using DynamoDB, Kinesis and Lambdas.

Goals:

By the end of the presentation, the audience should:

have an understanding of the benefits of event sourcing

have an understanding of how event sourcing works

be able to identify how event sourcing can be applied to their environment
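How event sourcing works can be sketched in a few lines of Scala (the names here are illustrative, not Atlassian's implementation): events are immutable facts, and current state is never updated in place but derived by folding over the event stream.

```scala
// Events are immutable facts appended to a log.
sealed trait AccountEvent
case class Deposited(amount: BigDecimal) extends AccountEvent
case class Withdrawn(amount: BigDecimal) extends AccountEvent

// Current state is a fold over the event stream.
def balance(events: Seq[AccountEvent]): BigDecimal =
  events.foldLeft(BigDecimal(0)) {
    case (b, Deposited(a)) => b + a
    case (b, Withdrawn(a)) => b - a
  }

// Replaying a prefix of the log reconstructs any historical version of state.
val log = Seq(Deposited(100), Withdrawn(30), Deposited(5))
```

Because the log is never mutated, replaying it into a new fold function is also how events can be reinterpreted into new microservices and ephemeral data stores.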

In addition to my experience in software engineering and speaking at several conferences, I have five years of experience teaching math and programming courses to high school and university students (though mostly in Russian).

Friday (17th Jun.) 10:25

The purpose of this talk is to introduce the differences between Scala and Dotty, as well as the support for Dotty in the Scala plugin for IntelliJ IDEA. First of all we will look at the differences between the type systems. Besides that, you will learn which aspects of language support in IntelliJ IDEA have already been implemented and which will be implemented in the near future. Among other things, the talk will cover the build process of the project and incremental compilation.

Technology Evangelist by day, Evil Magician by night, José actually comes from the Perl world (but he comes in peace).

He enjoys being on stage and has a passion for close-up magic and card cheating; feel free to ask him for a demonstration of any or both; he’ll either show you a miracle or why you should never play cards with people you don't know.

José works on the evangelism team at Codacy, an automated code review platform.

Friday (17th Jun.) 10:25

A collection of horror stories that derive from a lack of best practices. Everything will be covered: How to make companies lose money, shutdown systems and disable payments (or worse) just from overlooking simple procedures that "take too much time" or "cost too much" to implement. We may even dim the lights at certain points in the presentation.

Konrad is a late-night passionate dev living by the motto "Life is Study!". His favourite discussion topics range from distributed systems to capybaras. He has founded and run multiple user groups (Java, Scala, Computer Science, ...), and is part of the program committees of awesome conferences such as GeeCON and JavaOne SF. Other than that, he's a frequent speaker on distributed systems and concurrency topics at conferences all around the world. In those rare times he's not coding, he spreads the joy of computer science by helping local user groups and whitepaper reading clubs. He also holds a number of titles, the most fun of which is Java One RockStar 2015.

Friday (17th Jun.) 11:35

In this talk we'll have a deeper look into Akka Streams (the implementation) and Reactive Streams (the standard). The term streams has been recently pretty overloaded, so we'll disambiguate what streams are and what they aren't. We'll dive into a number of real life scenarios where applying back-pressure helps to keep your systems fast and healthy at the same time. We'll mostly focus on the Akka Streams implementation, but the general principles apply to any kind of asynchronous programming.
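Back-pressure itself can be illustrated without any streaming library at all. The toy below uses a plain JDK bounded queue (this is a conceptual analogue, not the Akka Streams API): the producer is throttled because `put` blocks whenever the slow consumer has not yet signalled demand by taking an element.

```scala
import java.util.concurrent.ArrayBlockingQueue
import scala.collection.mutable.ArrayBuffer

// A bounded buffer gives a crude form of back-pressure: put() blocks the
// fast producer whenever the buffer of 4 elements is full.
val buffer   = new ArrayBlockingQueue[Int](4)
val consumed = ArrayBuffer.empty[Int]

val consumer = new Thread(new Runnable {
  def run(): Unit = (1 to 10).foreach { _ =>
    Thread.sleep(5)              // a deliberately slow consumer
    consumed += buffer.take()    // taking an element is the demand signal
  }
})
consumer.start()
(1 to 10).foreach(buffer.put)    // a fast producer, throttled by the buffer
consumer.join()
```

Akka Streams generalizes this idea with asynchronous, non-blocking demand signalling instead of blocking threads, but the effect on the producer is the same: it cannot outrun the consumer.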

Dean Wampler is the Big Data Architect at Lightbend and specializes in the application of Functional Programming principles to “Big Data” applications, using Hadoop and alternative technologies.

Friday (17th Jun.) 11:35

Spark is implemented in Scala and its user-facing Scala API is very similar to Scala's own Collections API. The power and concision of this API are bringing many developers to Scala. The core abstractions in Spark have created a flexible, extensible platform for applications like streaming, SQL queries, machine learning, and more.

Scala's uptake reflects the following advantages over Java:

A pragmatic balance of object-oriented and functional programming.

An interpreter mode, which allows the same sort of exploratory programming that Data Scientists have enjoyed with Python and other languages. Scala-centric "Notebooks" are also now available.

A rich Collections library that enables composition of operations for concise, powerful code.

Tuples are naturally expressed in Scala and very convenient for working with data.

Scala idioms lend themselves to the construction of small domain specific languages, which are useful for building libraries that are concise and intuitive for domain experts.
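The collections and tuples points above can be seen in one tiny pipeline. This is plain Scala, but the same chain of operations reads almost identically against a Spark RDD:

```scala
// Word count in collections style: tuples as lightweight records,
// operations composed into a single pipeline.
val counts = Seq("spark", "scala", "spark")
  .map(word => (word, 1))
  .groupBy { case (word, _) => word }
  .map { case (word, pairs) => (word, pairs.map(_._2).sum) }
```

In Spark the `groupBy`/`map` pair would typically become a single `reduceByKey`, but the shape of the code, and the role tuples play in it, carries over directly.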

There are disadvantages, too, which we'll discuss.

Spark, like almost all open-source Big Data tools, leverages the JVM, which is an excellent, general-purpose platform for scalable computing. However, its management of objects is suboptimal for high-performance data crunching. The way objects are organized in memory and the subsequent impact that has on garbage collection can be improved for the special case of Big Data. Hence, the Spark project has recently started an initiative called "Tungsten" to build internal optimizations using the following techniques:

Custom data layouts that use memory very efficiently with cache-awareness.

Manual memory management, both on-heap and off-heap, to minimize garbage and GC pressure.

Denys is a research assistant at LAMP/EPFL, and has previously worked on off-heap memory, quasiquotes and macros for Scala.

Friday (17th Jun.) 11:35

Scala has historically been a JVM-centric programming language. The situation started to change with the appearance of Scala.js, which opened the door to front-end development for Scala programmers. This talk will expand the horizons of Scala even more: we’re going to announce a new ahead-of-time compiler and lightweight managed runtime designed specifically for Scala.

Noel has been using Scala for some six years and functional programming for about 15 years. He teaches Scala, helps teams adopt Scala successfully, writes books about Scala, and is an experienced conference speaker.

Friday (17th Jun.) 11:35

What does it mean to think like a functional programmer? Why is it so hard to adopt functional programming if you are steeped in OO? In this talk I'll address these questions, using ideas from the philosophy of science. The talk will give new insight into the process of becoming productive in Scala, and the mindset needed to aid this process. Using the framework of "The Structure of Scientific Revolutions" we can see programming as undergoing three revolutions: structured programming, object-oriented programming, and now, functional programming. Each revolution brings with it a new paradigm: basic assumptions and values that make moving across paradigms difficult to achieve. In this talk I'll describe the paradigm of the functional programmer: what is valued and what is considered good code. I will contrast this with the paradigm of the object-oriented programmer, and finally consider whether it is possible to reconcile these two approaches to programming.

Flavio W. Brasil is a software engineer at Twitter, working on the team responsible for maintaining the high-performance tweet backend service. He is an experienced developer who has specialized in Scala development and performance analysis on the JVM over the last five years. He has experience with a wide range of technologies, from the PalmOS network stack in C to billing systems in Smalltalk, contributing to several open source projects and creating his own, including the Activate Persistence Framework, a high-performance persistence solution, and Clump, a free monad that addresses the problem of knitting together data from multiple sources in an elegant and efficient way.

This talk will give an overview of the "Scylla and Charybdis" theorem that supports the query compilation, present the Quoted Domain Specific Language (QDSL) approach used by the API, and give a quick view of Quill's functionality.

Deep learning engineer at Skymind with a passion for AI and for working on machine-learning problems at scale. Previous experience includes data science and engineering work at Change.org and a comprehensive consulting career.

Andy is a mathematician turned distributed computing entrepreneur.

Besides being a Scala/Spark trainer, Andy has also participated in many projects built using Spark, Cassandra, and other distributed technologies, in various fields including geospatial, IoT, automotive and smart-city projects.

Andy is also a member of the program committees of the O’Reilly Strata, Scala eXchange, Data Science eXchange and Devoxx events.

Friday (17th Jun.) 13:20

Deep Learning is taking data science by storm. Unfortunately, most existing solutions aren’t particularly scalable. In this talk, we will show how we can easily implement a Spark-ready version of one of the most complex deep learning models, the Long Short-Term Memory (LSTM) neural network, widely used in the hardest natural language processing and understanding problems, such as automatic summarization, machine translation, question answering and discourse. We will also show an LSTM demo with interactive, real-time visualizations using the Spark Notebook and Spark Streaming.

Sébastien Doeraene is a compiler/runtime systems hacker, and the author of Scala.js. He is a Ph.D. student at EPFL in the programming methods laboratory (LAMP) led by Martin Odersky, also known as the Scala team. He holds bachelor's and master's degrees in computer science engineering from Université Catholique de Louvain in Belgium. When he is not busy coding, he sings in choirs and a cappella groups, or rides around on a unicycle.

Friday (17th Jun.) 13:20

Have you ever dreamed of understanding how Scala.js works under the covers? Knowing your way around its codebase? This is your chance! This advanced talk will take you to the deepest parts of Scala.js: its compiler, its Intermediate Representation, its linker and its optimizer. Through live coding within the Scala.js codebase, we will explain how the Scala.js compilation pipeline works, and how you can modify it. After this talk, you will understand the Scala.js IR and be able to manipulate it. And you'll be ready to write your custom Linker Plugins! Attendees are expected to be proficient in Scala and have some compiler background (e.g., knowing what an AST is is vital).

Petr is a Software Engineer who specialises in the design and implementation of highly scalable, reactive and resilient distributed systems. He is a functional programming and open source enthusiast and has expertise in the area of big data and machine classification techniques. Petr participates in the whole software delivery life-cycle: from requirement analysis & design through to maintaining systems in production. During his career, he has worked for various companies, from start-ups to large international corporations. Technically, Petr is a SMACK (Spark, Mesos, Akka, Cassandra, Kafka) evangelist, enjoys working with Akka, and has deep knowledge of the toolkit’s features. Petr is also a certified Spark Developer.

Friday (17th Jun.) 13:20

In this talk we are going to discuss various state-of-the-art open-source distributed streaming frameworks, their similarities and differences, implementation trade-offs, their intended use-cases and how to choose between them. I’m going to focus on popular frameworks including Spark Streaming, Storm, Samza and Flink. In addition, I’m going to cover a theoretical introduction, common pitfalls, popular architectures and much more. The demand for stream processing is increasing: immense amounts of data have to be processed quickly from a rapidly growing set of disparate data sources, which pushes the limits of traditional data processing infrastructures. Stream-based applications, including trading, social networks, the Internet of Things and system monitoring, are becoming more and more important, and a number of powerful, easy-to-use open source platforms have emerged to address this. My goal is to provide a comprehensive overview of modern streaming solutions and to help fellow developers pick the best possible option for their particular use-case. This talk should be interesting for anyone who is thinking about, implementing, or has already deployed a streaming solution.

Holden Karau is a software development engineer and is active in open source. She is a co-author of Learning Spark & Fast Data Processing with Spark and has taught intro Spark workshops. Prior to IBM she worked on a variety of big data, search, and classification problems at Alpine, Databricks, Google, Foursquare, and Amazon. She graduated from the University of Waterloo with a Bachelor of Mathematics in Computer Science. Outside of computers she enjoys dancing & playing with fire.

Friday (17th Jun.) 14:30

This session will cover our own and the community's experiences scaling Spark jobs to large datasets, and the resulting best practices, along with code snippets to illustrate them.

The planned topics are:

Using Spark counters for performance investigation

Spark collects a large number of statistics about our code, but how often do we really look at them? We will cover how to investigate performance issues and figure out where to best spend our time using both counters and the UI.

Working with Key/Value Data

Replacing groupByKey for awesomeness

groupByKey makes it too easy to accidentally collect all of the individual records for a key, which may be too large to process. We will talk about how to replace it in different common cases with more memory-efficient operations.
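In Spark itself the usual fix is `pairs.reduceByKey(_ + _)` instead of `pairs.groupByKey().mapValues(_.sum)`. A plain-collections analogue (illustrative only, no Spark dependency) shows the difference in shape:

```scala
val pairs = Seq("a" -> 1, "b" -> 2, "a" -> 3)

// groupByKey-style: materializes every value for a key before summing.
val grouped = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }

// reduceByKey-style: folds each value into a running total, so the full
// per-key collection of records is never built up in memory.
val reduced = pairs.foldLeft(Map.empty[String, Int]) {
  case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0) + v)
}
```

On a cluster the reduce form also combines partial sums map-side before shuffling, which is where most of the memory and network savings come from.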

Effective caching & checkpointing

Being able to reuse previously computed RDDs without recomputing can substantially reduce execution time. Choosing when to cache, checkpoint, or what storage level to use can have a huge performance impact.

Considerations for noisy clusters

Functional transformations with Spark Datasets

How to get some of the benefits of Spark’s DataFrames while still having the ability to work with arbitrary Scala code

Generalist with a nerdy streak and a weakness for solving problems with software. Currently into Scala (a lot). Among many other things, Mathias is the original author of spray.io, which was acquired by Lightbend in 2013.

As a long-time and active member of the Java and Scala open-source communities he is especially excited about everything performance-, scalability- and web-related.

Friday (17th Jun.) 14:30

Since its inception more than two years ago, the effort to establish a standard protocol for asynchronous stream processing under the name "Reactive Streams" (RS) has received a lot of attention. Many users and organizations are excited by this powerful new abstraction for defining and scaling pipelined processing logic in a fully asynchronous, non-blocking and generally reactive fashion.

Typically applications built on RS thereby rely on an RS infrastructure implementation to provide the general building blocks for defining arbitrarily complex business logic as a series of stream transformations. An RS infrastructure implementation also forms the "glue" that's required for interfacing with RS-compatible domain adapters (e.g. for databases, HTTP APIs and messaging systems).

While the number of RS-compatible domain adapters has risen steadily over the course of the last year, the range of available RS infrastructure implementations is still limited.

For Scala users there is currently only a single one, akka-stream, which is solidly engineered towards a particular set of design goals.

In order to further explore the design space of the still-young Reactive Streams domain, another fully-featured Reactive Streams infrastructure toolkit for Scala, called "Swave", has been built from scratch with a clear focus on maximum performance, a simple and powerful API, and minimal dependencies.

This talk will introduce you to the project, its general design approach, feature set and core implementation choices as well as basic benchmark and performance figures.

We'll contrast with other RS implementations and highlight pros and cons from a user's perspective.

So the next time you are faced with the question of which Reactive Streams implementation to use you'll have one more choice to pick from.

And even if you can't decide you're still good! After all, the "Reactive Streams" label means: Interoperability for the win!

The story of better metaprogramming in Scala started in early 2013, after we released macros 1.0 and got initial feedback from early adopters. This talk will provide an illustrated guide through the history of the improvements we have made and are planning to make to metaprogramming. We will see what worked, what didn't, and what we're excited about for the future of macros in scala.meta and Dotty.

Dave is a Scala developer and consultant at Underscore. He has been working with Scala since 2010 and functional programming for over a decade. He has spoken at numerous conferences worldwide, including Scala Days, Scala Exchange, and ICFP.

Friday (17th Jun.) 14:30

Getting to grips with a new programming language can be daunting, especially if it requires learning a new discipline such as functional programming. In this talk, Sofia and Dave will provide a step-by-step guide to adopting and mastering Scala from two different perspectives: the graduate developer and the visiting consultant.

Sofia Cole is a developer at YOOX NET-A-PORTER GROUP with eighteen months' experience working with Scala. Dave Gurnell is a consultant at Underscore with over a decade's experience writing functional code. Between them, they will walk you through their experiences at YOOX NET-A-PORTER GROUP migrating from a legacy Perl monolith to a Scala microservice architecture.

Topics covered will include:

How to sell Scala within your organization.

What benefits do managers and developers care about?

A step-by-step guide for navigating the sea of Scala and FP concepts, without losing yourself in a monadic storm

Shared joys and pains from different developers from disparate backgrounds, and how to reconcile them

The tactical and targeted use of hack days, and functional programming workshops, including a step-by-step guide for running coding dojos

A guide to pair programming from two different perspectives: peer-pairing and teacher-student pairing

This will be an entertaining and enlightening alternative take on the Scala adoption story. The speakers absolutely promise not to quarrel on-stage (unless the situation particularly demands it).

Bill Venners is president of Artima, Inc., publisher of Scala books and Scala developer tools, and co-founder of Escalate Software, LLC, provider of Scala training. He is the lead developer and designer of ScalaTest, an open source testing tool for Scala and Java developers, and Scalactic, a library of utilities related to quality. Bill is also coauthor with Martin Odersky and Lex Spoon of the book, Programming in Scala.

Friday (17th Jun.) 15:40

In ScalaTest 3.0's new async testing styles, tests have a result type of Future[Assertion]. Instead of blocking until a future completes, then performing assertions on the result, you map assertions onto the future and return the resulting Future[Assertion] to ScalaTest. The test will complete asynchronously when the Future[Assertion] completes. This non-blocking way of testing requires a very different mindset and different API. In this talk Bill Venners will show you how async testing was integrated into ScalaTest, and explain what motivated the design decisions. He'll show you how to use the new features, and suggest best practices for async testing on both the JVM and Scala.js.
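
A rough sketch of the pattern in plain Scala futures (fetchAnswer is a made-up service call, not a ScalaTest API; in ScalaTest 3.0's async styles the mapped value would be an Assertion rather than Unit):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical asynchronous call under test
def fetchAnswer(): Future[Int] = Future(40 + 2)

// Blocking style would Await the result and then assert, tying up a
// thread per test. The async style maps the assertion onto the future
// and hands the resulting future back to the framework, which completes
// the test when the future does.
val futureAssertion: Future[Unit] =
  fetchAnswer().map(answer => assert(answer == 42))
```

The test thread is released immediately; failure surfaces when the framework observes the failed future.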

Pathikrit is a Principal at Coatue Management where he enjoys using Scala to solve complex problems in finance. He is also the author and maintainer of many popular open-source Scala libraries, such as better-files and scalgos. Pathikrit's interests include functional programming, databases and algorithms.

Friday (17th Jun.) 15:40

Doing I/O in Scala (and Java) involves either invoking some magic "FileUtil" or browsing through StackOverflow. In this talk, we will introduce better-files (https://github.com/pathikrit/better-files) - a thin wrapper around Java NIO to enable simple, safe and sane I/O in Scala. We will also discuss problems with designing an I/O library that would make everyone happy, and different schools of thought, e.g. monadic vs. non-blocking vs. effect-based APIs.
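
For contrast, here is what a simple write-then-read looks like with raw Java NIO from Scala, the kind of ceremony better-files is designed to hide (the commented better-files equivalent is paraphrased from the project README, not verified here):

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.nio.file.{Files, StandardOpenOption}

// Raw Java NIO: workable, but noisy for everyday tasks
val path = Files.createTempFile("demo", ".txt")
Files.write(path, "hello\n".getBytes(UTF_8))
Files.write(path, "world\n".getBytes(UTF_8), StandardOpenOption.APPEND)
val content = new String(Files.readAllBytes(path), UTF_8)

// With better-files the same intent reads roughly as:
//   import better.files._
//   val f = File(path.toString)
//   f.overwrite("hello\n").appendLine("world")
//   f.contentAsString
```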

Vlad is a member of the Scala Team at EPFL and represents the Scala community at the Oracle-sponsored Project Valhalla Expert Group. He worked on several projects aimed at improving Scala performance, such as miniboxing (scala-miniboxing.org) and data-centric metaprogramming (scala-ildl.org).

Friday (17th Jun.) 15:40

We can compose data structures like LEGO bricks: a relational employee table can be modelled as a `Vector[Employee]`, where we use the standard `Vector` collection. Yet, few programmers know just how inefficient this is: iterating requires dereferencing a pointer for each employee and a good part of the memory is occupied by redundant bookkeeping information.

Data-centric metaprogramming is a technique that allows developers to tweak how their data structures are stored in memory, thus improving performance. For example, we can use `Vector[Employee]` throughout the program, despite its inefficiency. Then, when performance starts to matter, we simply instruct the compiler how to store the `Vector[Employee]` more efficiently, using separate arrays (or vectors) for each component. In turn, the compiler uses this information to optimize our code, automatically switching to the improved memory layout for the `Vector[Employee]`. This makes premature optimization redundant: we write the code using our favorite abstractions and, only when necessary, we tune them after the fact.
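
The layout change can be illustrated by hand (Employee and the field names are our example, not taken from the plugin):

```scala
// "Array of structs": Vector[Employee] stores a pointer per record, and
// each record carries its own object header; iteration chases a pointer
// per element
final case class Employee(name: String, salary: Double)
val employees: Vector[Employee] =
  Vector(Employee("Ann", 50000.0), Employee("Bob", 60000.0))

// "Struct of arrays": one flat array per field; columns are contiguous
// in memory, so scanning a single field is cache-friendly and free of
// per-record headers -- this is the layout the compiler can switch to
final case class EmployeeColumns(names: Array[String], salaries: Array[Double])
val columns = EmployeeColumns(
  employees.map(_.name).toArray,
  employees.map(_.salary).toArray
)

// Summing salaries now touches one contiguous array of doubles
val payroll = columns.salaries.sum
```

The point of the plugin is that you keep writing against `Vector[Employee]` and the transformation to the columnar layout is applied by the compiler, not by hand.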

There are many use cases for data-centric metaprogramming. For example, applied to Spark, it can produce 40% speedups. The Scala compiler plugin that enables data-centric metaprogramming is developed at github.com/miniboxing/ildl-plugin and is documented on scala-ildl.org.

Markus is a Developer Advocate at Red Hat and focuses on JBoss Middleware. He has been working with Java EE servers from different vendors for more than 14 years and talks about his favorite Java EE topics at conferences all over the world. He has been a principal consultant and has worked with different customers on all kinds of Java EE related applications and solutions. Besides that, he has always been a prolific blogger, writer and tech editor for different Java EE related books. He is an active member of the German DOAG e.V. and its representative on the iJUG e.V. As a Java Champion and former ACE Director he is well known in the community.

Friday (17th Jun.) 15:40

Building a complete system out of individual Microservices is not as simple as we're being told. While Microservices-based Architecture continues to draw more and more attention we're also starting to learn about the trade-offs and drawbacks. Individual Microservices are fairly easy to understand and implement, but they only make sense as systems, and it is in-between the services that the most challenging (and interesting) problems arise—here we are entering the world of distributed systems.

As we all know, distributed systems are inherently complex, and we enterprise developers have been spoiled by centralized servers for too long to easily understand what this really means. Just slicing an existing system into various REST services and wiring them back together again with synchronous protocols and traditional enterprise tools—designed for monolithic architectures—will set you up for failure.

This talk is going to distill the essence of Microservices-based systems and then introduce you to a new development approach to Microservices that gets you started quickly with a guided, minimalistic approach on your local machine and supports you every single step of the way to a scaled-out, production Microservices-based system composed of hundreds of services. At the end of this talk, you'll have experienced first hand how creating systems of microservices on the JVM is dead simple, intuitive, frictionless and, last but most importantly, a lot of fun!

Sébastien Doeraene is a compiler/runtime systems hacker, and the author of Scala.js. He is a Ph.D. student at EPFL in the programming methods laboratory (LAMP) led by Martin Odersky, also known as the Scala team. He holds bachelor and master degrees in computer science engineering from Université Catholique de Louvain in Belgium. When he is not busy coding, he sings in choirs and a cappella groups, or rides around on a unicycle.

Jon has been involved in the Scala community for over a decade, having launched the first commercial and open-source Scala software back in 2005. Since then, he has successfully deployed Scala projects into small, medium and large businesses, and UK government, but is best known these days for his work on Rapture and as the organizer of the annual Scala World conference.

Jon has spoken on a variety of topics at dozens of Scala conferences and user groups around the world over the last five years.

Adriaan leads the Scala team at Lightbend. He has worked on the Scala compiler since 2007, when he implemented support for type constructor polymorphism. While Adriaan was at EPFL, he focussed mostly on the type checker's implementation and the underlying theory, though he later briefly ventured further down the compilation pipeline when he rewrote the pattern matcher in 2.10. Since joining Lightbend, Adriaan has been trying to make all aspects of Scala easier to contribute to, by modularizing the library, simplifying the build, improving our infrastructure, polishing our process docs, etc. Please send Adriaan (@adriaanm) your thoughts on how we can make your life as a contributor easier and more pleasant!

Heather is a research scientist and the executive director of the Scala Center at EPFL in Lausanne, Switzerland. She recently completed her PhD in EPFL’s School of Computer and Communication Science under Professor Martin Odersky, where she contributed to the now-widespread programming language, Scala. Heather’s research interests are at the intersection of data-centric distributed systems and programming languages, with a focus on transferring her research results into industrial use.

She now oversees the newly-established Scala Center, whose goal is jointly to spearhead community open-source development on Scala and to improve education around Scala through a series of MOOCs.

Bill Venners is president of Artima, Inc., publisher of Scala books and Scala developer tools, and co-founder of Escalate Software, LLC, provider of Scala training. He is the lead developer and designer of ScalaTest, an open source testing tool for Scala and Java developers, and Scalactic, a library of utilities related to quality. Bill is also coauthor with Martin Odersky and Lex Spoon of the book, Programming in Scala.

Friday (17th Jun.) 16:50

Join us in the keynote room for the closing panel with Lukas Rytz, Adriaan Moors, Sébastien Doeraene, Jason Zaugg, Heather Miller and Bill Venners. The closing panel will be moderated by Jon Pretty.


Training

Want to get the most out of Scala Days Berlin? Register for an in-person training course before the Scala Days conference. All trainings are two-day courses and will take place Monday and Tuesday, June 13-14, 2016. The courses are designed for developers of all levels of proficiency with the Lightbend Reactive Platform.

All trainings will take place at:

Ramada Berlin Alexanderplatz

Karl Liebknecht Strasse 32,

Berlin, 10178 DE

Advanced Scala and Akka in Practice Training Workshops are SOLD OUT. Please contact [email protected] if you want to be on the waiting list for one of these training courses.

Fast Track to Scala (SOLD OUT)

This course is designed to give experienced developers the know-how to confidently start programming in Scala. The course ensures you will have a solid understanding of the fundamentals of the language, the tooling and the development process as well as a good appreciation of the more advanced features. If you already have Scala programming experience, then this course could be a useful refresher, yet no previous knowledge of Scala is assumed.

Objectives

After having participated in this course you should:

Be a competent user of Scala

Know and be able to apply the functional programming style in Scala

Know how to use fundamental Scala tools

Be confident to start using Scala in production environments

Audience

Application developers wishing to learn Scala

Prerequisites

Students taking this course should have:

Experience with Java (preferred) or another object-oriented language

No previous Scala knowledge is required

Setup Requirements

To complete the exercises in this course, students need to install the following before class:

Trainer: Dr. Andreas Schröder
Andreas is a Lightbend certified instructor for Fast Track to Scala and works at codecentric AG as an IT consultant. He holds a Dr. rer. nat. in computer science from Ludwig-Maximilians-Universität Munich. After further research as a post-doc, he decided to enter the trenches of the software development industry. He specializes in continuous delivery, devops, and (last but not least) functional programming.

When working on client projects, he enjoys championing clean code and functional principles - bringing back the fun, enthusiasm and sanity to software development teams. For the last year, he has been working on microservice architectures written in Scala with Play and Akka, and microservice infrastructures based on Amazon AWS.

In his academic past, Andreas has written over 30 scientific publications with over 280 citations on automated code refactoring, software engineering of service-oriented architectures and collaborating ensembles, internet of things, and constraint-based query language semantics.

Apache Spark Workshop (SOLD OUT)

This two-day course is designed to teach developers how to implement data processing pipelines and analytics using Apache Spark. Developers will use hands-on exercises to learn the Spark Core, SQL/DataFrame, Streaming, and MLlib (machine learning) APIs. Developers will also learn about Spark internals and tips for improving application performance. Additional coverage includes integration with Mesos, Hadoop, and Reactive frameworks like Akka.

Objectives

After having participated in this course you should:

Understand how to use the Spark Scala APIs to implement various data analytics algorithms for offline (batch-mode) and event-streaming applications

Understand Spark internals

Understand Spark performance considerations

Understand how to test and deploy Spark applications

Understand the basics of integrating Spark with Mesos, Hadoop, and Akka

Audience

Developers wishing to learn how to write data-centric applications using Spark.

Prerequisite

Experience with Scala, such as completion of Fast Track to Scala course

Experience with SQL, machine learning, and other Big Data tools will be helpful, but not required.

Trainer: Matthias Niehoff
Matthias Niehoff works as an IT consultant for codecentric AG in Karlsruhe. He works on big data and streaming applications, mainly using Apache Cassandra and Apache Spark. Matthias shares his experiences at conferences, meetups and user groups. Furthermore, he is a Lightbend certified trainer for Spark.

Advanced Scala (SOLD OUT)

Please contact [email protected] if you want to be on the waiting list for this training course.

If you already have programming experience with Scala and want to understand its advanced features, this course is for you. It is designed to help developers fully understand topics such as advanced object-functional programming, the power of Scala's type system, implicits, etc. The course also covers how to leverage these features to create well-designed libraries or DSLs, utilizing proven best practices.

Objectives

After having participated in this course you should:

Understand all aspects of the object-functional approach

Know and be able to apply advanced features of Scala's type system

Fully understand implicits and type classes

Be confident about creating libraries and DSLs with Scala

Audience

Application or library developers wishing to master Scala

Prerequisites

Students taking this course should have:

Full understanding of all concepts taught in Fast Track to Scala

At least 2 months of full-time hands-on development with Scala

Setup Requirements

To complete the exercises in this course, students need to install the following before class:

Trainer: Markus Hauck
Markus Hauck is IT Consultant at codecentric. His current passions are functional programming and the usage of modern type systems to guarantee safe code.

Akka in Practice (SOLD OUT)

Please contact [email protected] if you want to be on the waiting list for this training course.

Akka in Practice is a two-day course that teaches experienced developers how to use Akka to build real-world applications. This course not only covers the features provided by the core Akka library, but also introduces Akka Streams and Akka HTTP, two new modules which are extremely important for interacting with an Akka-based system. As we strongly believe that the most efficient and lasting way of learning is learning by doing, Akka in Practice has a strong focus on hands-on exercises, which are used to build an Akka-based web application step by step.

Prerequisite

Attendees have to bring their own laptop with Java 8 installed

We use Scala for code examples and exercises, hence attendees should be familiar with programming in Scala

Program

Introduction

Creating Actors

Testing Actors

Communication

Lifecycle and Supervision

Futures and Ask Pattern

Akka Streams

Akka HTTP

Outlook on Akka Cluster

Trainer: Heiko Seeberger
Heiko Seeberger is Fellow at codecentric and an internationally renowned expert on Scala and Akka. He has more than 20 years of experience in consulting and software development. Heiko tweets under @hseeberger and blogs under heikoseeberger.de.

Conference (2.5 days)

Conference + 2 training days (4.5 days)

2 Training days only

Before Feb 24

Before April 20

After April 20

€1000

€1200

€1250

The prices are exclusive of VAT. All tickets are subject to 19% VAT.

Please note that the registration fee is non-refundable. Once a registration has been made and the confirmation email has been sent out, the price is set and cannot be changed or adjusted. Price reductions (promotion codes) are always relative to the list price. It is not possible to combine different price reductions.

Registration includes conference materials, t-shirt, bag, and food during the conference and at all social events. Training workshop attendees will receive training materials, breakfast, lunch and an afternoon snack.

In case there is not a sufficient number of registrations for a particular workshop, we reserve the right to cancel it one month prior to the conference. If a workshop is cancelled, registered attendees will be contacted directly and they can register for another workshop or get a refund.

If you need a hotel during your stay in Berlin, you can book a discounted Scala Days conference rate with a price-match guarantee by clicking "book hotel" below.

Sponsors

Interested in being a sponsor at Scala Days Berlin? Contact Geeta Schmidt.

Platinum

Gold

Silver

Produced by

Code of Conduct

Our Code of Conduct is inspired by the kind folks at NE Scala, who adopted theirs from PNW Scala. We think they both nailed it.

Nobody likes a jerk, so please show respect for those around you. This applies to both in-person and online behavior.

Scala Days is dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, or religion (or lack thereof). We do not tolerate harassment of participants in any form.

All communication should be appropriate for a technical audience, including people of many different backgrounds. Sexual language, innuendo, and imagery are not appropriate for any conference venue, including talks.

Participants violating these rules may be asked to leave without a refund at the sole discretion of the organizers.

Crew Volunteers

All Crew Volunteers will be required to work on Wednesday, June 15th from 12:00 PM to 6:30 PM, as well as another four-hour shift on Thursday, June 16th or Friday, June 17th. In return for helping with the conference, Crew Volunteers will have free access to the conference and all social events. Crew registration is closed.

If you are an academic wanting to participate fully in Scala Days Berlin, you can send an email with a copy of your student ID to Dajana Guenther, [email protected], and in return you will receive a discounted rate for conference attendance.