Agile Contracts: Building Trust

The Fixed Price contract continues to be the most common means of
defining contracts for software development projects, despite the
amount of evidence suggesting that such contracts commonly contribute
to project failure. Schedule and cost overruns, expensive change
control procedures, and a lack of trust between customer and supplier
are typical war stories.

Essentially the fixed price contract is based on the same assumptions
about a fully predictable, easily planned future as the waterfall
model. The Agile movement has gone a long way to breaking down such
assumptions in the last decade, so that agile software delivery is now
firmly in the mainstream. However, suppliers still often face a
struggle to engage customers in a way which minimises contract
negotiation and allows real collaboration to begin.

In this session we will look at a number of possible contract models
which support an agile way of working to a greater or lesser degree,
and explore the issues surrounding the procurement process. Can a
contract form the basis of a relationship built on trust and mutual
benefit?

Agile Teams: Value-Focused, Values-Driven

No team works in isolation. All teams are part of larger ecosystems, which continually induce opposing stressors into the system. If Agile is fundamentally about delivering value to all stakeholders, the obvious question is how teams focus themselves on this goal for the greater good. This session takes a more philosophical and holistic look at the deeper values that should be cultivated in any group that could be bestowed the Agile tag. It will also explore some of the core rigidities and capabilities found in organisations, and how these affect the magic road to Agile.

Allocators for Shared Memory in C++03, C++11, and Boost

C++ allocators are rarely given much attention, but as soon as shared memory
is used as a means of interprocess communication they spring to mind. For a
long time custom allocators were a good idea in theory but impractical, until
C++11. Explaining allocators and their use case “shared memory” is the
focus of the presentation. The topics of the talk are the C++03 and C++11
allocators and the Boost.Interprocess library. It is shown how to
implement a custom allocator in C++03 and C++11, and how to employ Boost
to place containers like vector and set into shared memory.

The talk commences with the definition and limits of the C++03
standard allocators, and why they do not permit portable use of custom
allocators for shared memory segments. The presentation continues with a
discussion of the Boost.Interprocess allocator for shared memory.
Examples will show the use of Boost containers in shared memory. Based
on this knowledge the talk presents the C++11 allocators, how to
implement a custom one, and how the latest standard enables new use
cases for custom allocators. Finally the shared memory examples are
reconsidered for comparing the old and new C++ standard.
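As a taste of the material, the C++11 minimal allocator interface can be sketched as follows. This is a simplified illustration, not code from the talk; the class name is hypothetical, and a real shared-memory allocator would hand out memory from a mapped segment rather than the heap.

```cpp
#include <cstdlib>
#include <vector>

// A minimal C++11 allocator: std::allocator_traits fills in the rest.
// (Hypothetical name; a shared-memory allocator would carve memory
// out of a mapped segment instead of calling malloc.)
template <typename T>
struct MinimalAllocator {
    using value_type = T;

    MinimalAllocator() = default;
    template <typename U>
    MinimalAllocator(const MinimalAllocator<U>&) {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const MinimalAllocator<T>&, const MinimalAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MinimalAllocator<T>&, const MinimalAllocator<U>&) { return false; }

int sum_with_custom_allocator() {
    // The container works unchanged; only the allocation source differs.
    std::vector<int, MinimalAllocator<int>> v{1, 2, 3};
    int total = 0;
    for (int x : v) total += x;
    return total;
}
```

Compare this with C++03, where an allocator also had to supply pointer, reference, rebind and more by hand.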

An Exploration of the Phenomenology of Software Development

First let me say that this will not be a presentation by someone with
all the answers. It is to be a sharing of the more subtle aspects of
software development from a practitioner's standpoint.

The field has seen an unprecedented transition of a significant part
of humanity from physical work to an internally oriented environment
dealing with non-physical constructs.

Yet just how aware are we of this fact?
How do we see the development of the skills in dealing with this inner
world’s dynamic?
How can we communicate about it in any way that makes sense?

I consider that the future positive, and human, progress of software
development is dependent upon dealing with this ‘elephant in the room’
that few in our discipline (with some notable exceptions) want to talk
about. I also propose that it cannot be done from a singularly
results-oriented viewpoint.

Gathering insights from the likes of Wolfgang Goethe, Henri Bortoft,
Richard Glass and also the recent work of Christopher Alexander, the
speaker welcomes those who have the courage to literally take their
hearts into their hands and dive into this inner world from a
practitioner’s viewpoint.

Attendees will be invited to reflect about the inner processes of
software development by considering what should be the familiar ground
of various simple software design problems.

PS: This will be my first time presenting at ACCU (gulp), although I have
been to a fair number of the conferences in the past.

Designing one library is hard; designing an open-ended collection of interoperable libraries is
harder. Partitioning functionality across multiple libraries presents its own unique set of challenges:
Functionality must be easy to discover, redundancy must be eliminated, and interface and contract
relationships across components and libraries should be easy to explore without advanced IDE
capabilities. Further, dependencies among libraries must be carefully managed – the libraries must
function as a coherent whole, defining and using a curated suite of vocabulary types, but clients should
pay in compile time, link time, and executable size only for the functionality they need.

Creating a unified suite of multiple interoperable libraries also has many challenges in common with
creating individual ones. The software should be easy to understand, easy to use, highly performant,
portable, and reliable. Moreover, all of these libraries should adhere to a uniform physical structure, be
devoid of gratuitous variation in rendering, and use consistent terminology throughout. By achieving
such a high level of consistency, performance, and reliability across all of the libraries at once, the local
consistency within each individual library becomes truly exceptional. Moreover, even single-library
projects that leverage such principles will derive substantial benefit.

There are many software methodologies appropriate for small- and medium-sized projects, but most
simply do not scale to larger development efforts. In this talk we will explore problems associated with
very large scale development, and the cohesive techniques we have found to address those problems
culminating in a proven component-based methodology, refined through practical application at
Bloomberg. The real-world application of this methodology – including three levels of aggregation,
acyclic dependencies, nominal cohesion, fine-grained factoring, class categories, narrow contracts, and
thorough component-level testing – will be demonstrated using the recently released open-source
distribution of Bloomberg’s foundation libraries.

Auto - a necessary evil?

C++11 repurposed an old keyword “auto” to allow you to declare a variable
with a deduced type.

For example “auto i = 10;” declares a variable “i” of type “int”.

There were several motivations for providing a new meaning for this keyword.
However, like many things, once a new feature is provided people will find
creative things to do with it that may well go beyond the original
expectations of those who first made the proposal.

In this talk I will look at the times when you *must* use auto (which makes
it 'necessary'). I will then look at the interesting cases when you *might*
use it - or perhaps abuse it (which makes it 'evil'). I also expect to cover
some places where you may *not* use auto, and a few 'gotchas' where you may
not get what you expect!
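To give a flavour of these categories, here is a small sketch (illustrative only, not the talk's actual examples):

```cpp
#include <vector>
#include <type_traits>

int example() {
    // "Might use": the deduced type is obvious from the initializer.
    auto i = 10;                       // int
    static_assert(std::is_same<decltype(i), int>::value, "i is int");

    // "Must use": a lambda's closure type cannot be named, so storing
    // the closure itself requires auto.
    auto square = [](int x) { return x * x; };

    // Gotcha: vector<bool> hands back a proxy, not a bool - auto
    // deduces the proxy type, which may not be what you expect.
    std::vector<bool> flags{true, false};
    auto proxy = flags[0];             // std::vector<bool>::reference
    bool copy  = flags[0];             // a plain bool
    static_assert(!std::is_same<decltype(proxy), bool>::value,
                  "auto did not deduce bool here");
    (void)proxy;

    return square(i) + (copy ? 1 : 0);
}
```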

I will try to identify the strengths and weaknesses of using auto so that you
can make informed decisions in your own codebases about when auto should –
and should not – be used.

The talk will be mostly focused on C++ although I may compare and contrast
with similar facilities in other languages, such as 'var' in C#.

Bad test, good test

Foundational unit testing techniques are often taken for granted, but
are an essential underpinning for delivering maintainable software.
The tests need to assist software development not hinder it, and to
that end need to be flexible, robust, comprehensible and performant.
If you find yourself fighting your test suite, then something is
wrong.

In this session, we re-examine the basics of a unit test. We will work
through a number of examples with continuous input from attendees.
Each example will start with a test of questionable quality, and we
will work through the issues till we're happy that it's as good as we
can get it.
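As an illustration of the kind of transformation involved, consider this sketch using plain assert (the function names and code are hypothetical, not the session's examples):

```cpp
#include <cassert>
#include <string>

// Hypothetical code under test.
std::string greet(const std::string& name) {
    return name.empty() ? "Hello, stranger!" : "Hello, " + name + "!";
}

// Questionable: one test poking at several behaviours, named after the
// function rather than the intent - when it fails, you don't know why.
void test_greet() {
    assert(greet("Ada") == "Hello, Ada!");
    assert(greet("") == "Hello, stranger!");
}

// Better: one behaviour per test, each named for the intent it checks.
void greeting_uses_the_callers_name() {
    assert(greet("Ada") == "Hello, Ada!");
}

void empty_name_falls_back_to_stranger() {
    assert(greet("") == "Hello, stranger!");
}
```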

Although this session is not specifically about the testability of
software, this will necessarily be touched upon as we consider some
test cases. Examples will be written in several common languages, but
knowledge of all (or any) of them is not a pre-requisite.

Becoming a Better Programmer

We all want to be better programmers, right?
This entertaining session will help you to work out how.
With the help of a number of special guests, we will provide a series of practical, simple methods to become a better programmer. We'll gain some real insights from respected developers.
There will be plenty of hand-waving and jumping, a little philosophy, and some twists.
Be the best programmer you can!

C# is a doddle

C# is a simple language that has none of the flaws that continually
bite at other - particularly C++ - programmers. With automatic memory
management, a system of generics that can be understood by mere
mortals, a unified type system with no holes in it, and no complicated
name lookup schemes, C# is easy. Right?

C++11 The Future is Here

C++11 allows you to write better code faster. By “better” I mean maintainable code with fewer errors than
was possible in C++98. C++11 allows you to write less code for a given problem and have it run faster.
By “faster” I mean getting real-world code to run as fast as or faster than hand-tuned C, as fast as or
faster than code written in any modern language I know of, sometimes much faster. This can be done
today, using currently shipping compilers.

But most people are stuck in a 1970s or 1980s mindset. Can we catch up to C++11? Worse, many people
are stuck in a mess of “legacy code” creating a framework of constraints that discourage the use of
21st-century facilities.

My aim in this talk is not to enumerate the C++11 features or to go into great technical detail on a
select feature. My aim is to show how the best practices for C++ design and programming are better
supported by C++11 than by earlier versions. To do that, I discuss small code examples. I expect to
use the concurrency library, standard containers, and chrono. I expect to use initializer lists, move
semantics, variadic templates, lambda expressions, and type aliases. As usual, RAII (Resource Acquisition
Is Initialization) will feature large.
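For a flavour of the style, a small illustrative sketch (not from the talk itself) combining initializer lists, lambdas, move semantics and &lt;chrono&gt;:

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <string>
#include <vector>

int demo() {
    std::vector<std::string> names{"Bjarne", "Ada", "Alan"};   // initializer list

    std::sort(names.begin(), names.end(),
              [](const std::string& a, const std::string& b) { // lambda
                  return a < b;
              });

    std::vector<std::string> stolen = std::move(names);        // move, no deep copy

    using namespace std::chrono;                               // <chrono>
    auto timeout = seconds(2) + milliseconds(500);
    assert(duration_cast<milliseconds>(timeout).count() == 2500);

    return static_cast<int>(stolen.size());
}
```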

C++11 User-defined Literals and Literal Types

C++11 introduces user-defined literals (UDLs) that allow a programmer to mark numeric or string literals with a dimension suffix, e.g., 15_s to denote 15 seconds. For non-standard UDLs all suffix names must start with an underscore. The talk will show how to define such UDLs through operator"" _udlsuffix() overloads, what variations exist, and how to achieve application of UDLs at compile time where possible. There can be a need to determine the concrete integral type of an integer literal with a UDL suffix, which requires some interesting application of variadic templates and meta-programming. The talk will also give guidelines on which of the possible overloads to use when defining your own UDL operators and what rules to follow. It will further show the UDL operators to be expected from the next C++ standard and how to implement them DIY beforehand.
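A minimal sketch of such a suffix (illustrative only; the _s and _min names are hypothetical):

```cpp
// A hypothetical seconds suffix, evaluated at compile time.
// Non-standard UDL suffixes must begin with an underscore.
constexpr long long operator"" _s(unsigned long long n) {
    return static_cast<long long>(n);
}

// Suffixes compose: minutes defined in terms of seconds.
constexpr long long operator"" _min(unsigned long long n) {
    return static_cast<long long>(n) * 60_s;
}

static_assert(15_s == 15, "literal evaluated at compile time");
static_assert(2_min == 120, "suffixes compose");
```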

A second feature closely related to UDLs and compile-time evaluation is literal types in C++11. The biggest advantage of user-defined literal types is that they do not need to be PODs, i.e., they can have constructors, but can still be guaranteed to be initialized at compile time and thus can be ROMable on embedded systems, or will not become a synchronization nightmare in multi-threaded initialization. The talk will also show the limitations of compile-time evaluation and give a glimpse of what abilities to expect from future C++ standard versions with compile-time evaluation without the need for template metaprogramming.

C++14 Early thoughts

I focus on small changes with a chance to make it into C++14, such as braces for copy initialization, return type deduction in functions (just as in lambdas), generic (polymorphic) lambdas, user-defined literals in the standard library, dynamic arrays, generalized constexpr functions, and “concepts lite” (template argument requirements). Much of this has been implemented and is being experimented with. If time allows, I'll present a few ideas of my own that could be important but have little chance of making it into C++14.
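Two of these proposals, return type deduction in ordinary functions and generic lambdas, can be sketched in the syntax being proposed (an illustration, not code from the talk):

```cpp
#include <cassert>
#include <string>

// Return type deduction in an ordinary function, just as in lambdas.
auto twice(int x) { return x * 2; }

int demo() {
    // A generic (polymorphic) lambda: auto parameters make one
    // closure callable with many argument types.
    auto plus = [](auto a, auto b) { return a + b; };

    assert(plus(1, 2) == 3);
    assert(plus(std::string("a"), std::string("b")) == "ab");
    return twice(plus(10, 11));
}
```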

There will be an extensive Q&A on topics related to the evolution of C++.

C++ for Very Small Embedded Systems

Many embedded systems today have more computing power and memory
available than a typical workstation 10 years ago.
On such systems, C++ is the main programming language.

But there are still systems that have only a few kilobytes of
memory for the program text and its data.
For such systems many developers still believe that C++ doesn't work
and you have to resort to assembler or maybe C.

This talk will show how C++ can be useful in such very small systems,
and programming techniques that keep the overhead of C++ on such systems
(over C) at essentially zero.
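As a sketch of the zero-overhead style (with hypothetical names, not examples from the talk): templates and constexpr do at compile time what C would do with macros, leaving no runtime cost and catching mistakes before the code ever runs.

```cpp
#include <cstdint>

// A compile-time GPIO pin mask: the template compiles away entirely,
// producing the same code a hand-written C macro would - but a bad
// pin number is rejected at compile time instead of silently wrapping.
template <unsigned Pin>
struct PinMask {
    static_assert(Pin < 32, "pin out of range");
    static constexpr std::uint32_t value = 1u << Pin;
};

constexpr std::uint32_t set_bits(std::uint32_t reg, std::uint32_t mask) {
    return reg | mask;
}

static_assert(PinMask<3>::value == 0x8, "mask computed at compile time");
```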

Intended audience:
This talk is for embedded programmers who are tired of hearing that C++
is not for them, or who actually believe this themselves.

C in the 21st century. Extensible languages with MPS

Many new programming languages are emerging these days (e.g. http://emerginglangs.com/),
however C is still the most used language and serves as the root for most of the “new” languages.
Especially when it comes to the bare metal and real embedded development, there is nothing like C.

Compared with C++, C is admittedly less powerful / extensible (think of Domain Specific Languages,
DSLs), but fiddling around with template meta programming (TMP) is really only for a small group of
geeks. C++ has become a complex language (C++11 is still complex) and has lots of exceptions and
corner cases around originally useful language concepts (e.g. rvalue references and the issue with
default generated move constructors/assignment operators, etc.).

In C however we are painfully missing language concepts especially for developers in the embedded
area. Because of its minimal language core there is no support for real encapsulation, safe types,
operations with pre/post conditions, physical units and quantities, or common concepts in the
embedded domain like tasks, messages or state-machines.

In this session I will show how we build modular languages with special emphasis on developing
software for embedded systems (the principle, however, is domain-independent).
We show how to extend the C programming language with the language concepts mentioned above.
Embedded software often relies on state machines, so there will be direct support for programs with
states, triggers, events and actions as first-level concepts. Different flavours of syntax (e.g.
textual, graphical and tabular) can even be mixed here. Expect a really usable physical units and
quantities language with code completion and error messages in the IDE (not possible with TMP).

I will present the power of modular languages and show how to build language extensions with the
MPS Language Workbench from JetBrains. The mbeddr.com project (http://mbeddr.com) offers a
set of C language extensions for embedded software development. A case study with one of our real-world
sensors will demonstrate the advantages of the mbeddr solution.

C++ pub quiz

Join us for a pub quiz on C++! You will be working in groups where I
present interesting code snippets in C++ and you will discuss, reason
about and sometimes need to guess what the code snippet will print
out. There will be many educational snippets where we elaborate on the
basics of C++, but some of the snippets will be really hard with
surprising answers and where we explore the dark and dusty corners of
the language.

Some knowledge of C++ is essential, while experience with compiler
development and an eidetic memory of the C++ standard are very useful.

CATCH - A natural fit for automated testing in C, C++ and Objective-C

Writing test code should be as easy as writing any other code. C++, especially, has been notorious for being a second-class citizen when it comes to test frameworks. There are plenty of them but they tend to be fiddly to set up and ceremonious to use. Many of them attempt to follow the xUnit template without respect for the language environment they are written for.
CATCH is an attempt to cut through all of that. It is simple to get and simple to use - being distributed in a single header file - yet is powerful and flexible.
This presentation introduces you to the usage of CATCH, highlighting where it is different - and takes a look behind the scenes at how some of it is implemented (as many have been curious).

Cheating Decline: Acting now to let you program well for a really long time

Programming, like mathematics, is often seen as a young person's game. Old programmers are supposed to “graduate” to become managers (so they can spend their time in meetings), architects (so they can spend their time in meetings about diagrams), or redundant (so they can go away and be forgotten). This talk is for programmers, young and old, who want to spend their time cranking out code until someone pries their cold dead fingers from the keyboard.

The talk will have two parts. The first will be summarized knowledge from successful old programmers and the people who work with them. The second will look at how, specifically, programmers decline, and it will contrast classical views about what programmers do with some of the claims of ecological (or embodied) cognitive science. If we consider ourselves animals acting in a world, rather than “brains in a vat” cogitating away, we can learn how to compensate for the inevitable.

Cleaning Code - Tools and Techniques for legacy restoration projects

“Too big to fail” is not often a term associated with software, but many companies rely on large
software systems that are business critical. Over time, the pressure of deadlines and other
forces can reduce the quality of these systems to the point where it impacts business. When
systems were small it was easier to push for a version 2.0 rewrite (this time we’ll do it right! With
EJB!) but that is just not an option for most large systems.

This presentation will describe techniques for managing large legacy restoration projects. It will
show how to create a technical roadmap and how to measure the value produced. It will show
how to prioritize technical debt remediation tasks. It will show how various tools and techniques
can be used to visualize different aspects of the development process. It will show how to put
the psychology of change to use to keep developers and stakeholders motivated during the process.

Code as a crime scene

Human intuition is unequaled when it comes to assessing the quality of a design. Intuition, however, is not without problems. It's prone to social and cognitive biases that are hard to avoid. Human expertise also suffers from a lack of scalability. As such, intuition rarely scales to encompass large software systems and we need a way to guide our expertise.
We need strategies to identify design issues, a way to find potential suspects indicative of code smells, team productivity bottlenecks, and complexity. Where do you find such strategies if not within the field of criminal psychology? Inspired by modern offender profiling methods, we'll develop a metaphor for identifying weak spots in our code. Just like we want to hunt down offenders in the real world, we need to find and correct offending code in our own designs.
The session will look into test automation, software metrics and findings from different fields of psychology.

Coding Dojo Challenge-Refactoring

In this hands-on session we will be looking at a rather smelly piece
of code which helpfully has a fairly comprehensive suite of automated
tests. Refactoring is one of the key skills of Test-Driven
Development, and this is your chance to really practice it. The idea
is not to rewrite the code from scratch, but rather, by taking small
refactoring steps, gradually transform the code into a paragon of
readability and elegance.

We'll be stepping into the Coding Dojo together,
which is a safe place designed for learning, where it doesn’t matter
if we make mistakes. In fact all the code will be thrown away
afterwards. You should feel free to experiment, try out different
refactoring approaches, and get feedback from your peers. The great
thing about this Kata is that since the tests are very good and very
quick to run, they will catch every little refactoring mistake you
make. You should experience how programming is supposed to be -
smooth, calm, and always minutes away from committable code. The last
part of the session is the retrospective, when we discuss what we've
learnt, and how we can apply our new skills in our daily production
code.

The code kata we'll be looking at is “Tennis”, and the starting code
is available on my github account,
(see https://github.com/emilybache/Refactoring-Katas/Tennis). The
code is available in various programming languages, including Java,
C++, Python. You should bring a laptop with your favourite
coding environment or IDE installed, or plan to pair with someone who has.

Concepts Lite-Constraining Templates with Predicates

In this talk I introduce a new language feature being proposed for C++14:
template constraints (a.k.a., concepts lite). A constraint is a predicate that
determines whether or not a template argument can be used with a template. Using
constraints, we can improve the declaration of templates by directly stating
their requirements, and we can also overload functions on constraints.
Constraints also allow type errors to be caught at the point of use, meaning
that scrolling through dense stacks of compiler errors will soon be a thing
of the past.
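To illustrate what constraints replace, compare the C++11 enable_if workaround with the proposed requires syntax (a sketch for comparison, not an example from the talk):

```cpp
#include <type_traits>

// What constraints replace: the C++11 enable_if workaround for
// saying "T must be an integral type".
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T twice(T x) { return x + x; }

// With concepts lite the same intent reads as a direct requirement
// (proposed syntax, shown here as a comment for comparison only):
//
//   template <typename T>
//     requires Integral<T>()
//   T twice(T x) { return x + x; }
//
// A call like twice("oops") is then rejected at the point of use,
// with a message naming the failed constraint rather than a wall
// of instantiation errors.
```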

As a language feature, template constraints are minimal and uncomplicated,
emphasizing correctness of template use rather than the correctness of template
definitions. This means that they can be adopted incrementally and easily into
an existing code base.

The talk will cover examples of how to use constraints with generic algorithms
and data structures, member functions and constructors, overloading, class
template specialization, and the definition of constraints themselves. I will
also discuss my experiences using constraints in day-to-day programming,
including some good ideas and some not-so-good.

An experimental compiler based on GCC-4.8 will also be made available to the
audience.

Culture Hacking

Culture hacking is the systematic development of culture in the workplace. In other words, a deliberate, continuous effort to develop a group's set of shared attitudes, values, goals, and practices that both describe and shape the group. Culture hacking originates with software people and is faithful to the particular ethos of software hackers. It's about modifying culture, instead of software, for personal betterment and the betterment of others. Agile, for example, is one big culture hack because it's a system of values, principles, methods and practices that together greatly influence culture on all levels of the organization. In this talk I'll start with an exploration of culture hacking and why it's important, but the bulk of the talk will be my company's story of culture hacking over the past five years. I'll go over what all this hacking has taught us about what works and what doesn't at our company, as well as discussing how you can do culture hacking at your own workplace.

Death by dogma versus assembling agile

Almost all organizations, large and small, are turning towards agile to escape failing traditional software development projects. Due to this strong increase in the popularity of agile approaches and techniques, many newcomers will enter the field of agile coaching. Many of them lack the very necessary real-life experience but proudly wave their agile certificates, proving they at least had two days of training.
During this challenging talk appreciated international speaker Sander Hoogendoorn, global agile thought leader at Capgemini, shows what happens to organizations and projects which are coached by well-meaning consultants with little experience. Often this leads to very dogmatic applications of the more popular agile approaches, mostly Scrum and Kanban. This dogmatic thinking currently blocks the use of more elaborate techniques, tools and technology in agile projects, even when these would really improve projects. “No, you cannot do modeling in Scrum” and “Burn-down charts are mandatory” are two such simple real-life example statements. Due to this lack of experience and the growing dogmatism in the agile beliefs, more and more agile projects will fail.
But maybe even more importantly, during this talk Sander will also show that there is no such thing as one-size-fits-all agile. Different organizations and different projects require different agile approaches. Sometimes lightweight agile, user stories, simple planning and estimation are just fine. But in many projects the way of working should rather be built up from slightly more enterprise-ready approaches, for example using Smart or FDD, smart use cases, standardized estimation, multiple distributed teams and on-line dashboards. During this talk Sander demonstrates how to assemble an agile approach that is specifically suitable for YOUR project, of course with many examples from real-life agile implementations.

Dynamic C++

Data from external sources comes in diverse types and brings with it the need for datatype conversion. How can a C++ programmer accurately and efficiently transfer data from a relational or XML database to JSON or HTML without stumbling over the C++ type-checking mechanism? The answer is by using type erasure techniques; the session will enumerate, explore and compare the most popular C++ type erasure solutions.

Given the above problem as well as both historical (ANSI C union and void*, MS COM Variant, boost::[variant, any, lexical_cast]) and recent (boost::type_erasure, Facebook folly::dynamic) development trends (including the pending boost::any C++ standard proposal), it is obvious that there is a need for a way around the static nature of the C++ language. There is also more than one solution to this problem; the session will explore the internals of boost::[variant, any, type_erasure], folly::dynamic and Poco::Dynamic. The design, capabilities, and pros and cons of each solution will be examined. Performance benchmark comparisons will be reviewed as well.

Type safety is an important feature of C++; type erasure is a necessary technique for modern software development. The session examines and compares existing solutions to these important concerns.
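As a small taste of type erasure (here using std::any, the standardized descendant of boost::any; an illustration, not the session's benchmarks):

```cpp
#include <any>
#include <string>
#include <vector>

// Type erasure with std::any: heterogeneous values hidden behind
// one static type, recovered safely with any_cast.
std::string describe(const std::any& value) {
    if (value.type() == typeid(int))
        return "int:" + std::to_string(std::any_cast<int>(value));
    if (value.type() == typeid(std::string))
        return "string:" + std::any_cast<std::string>(value);
    return "unknown";
}

// A "dynamic" row, as might come from a database result set.
int count_ints(const std::vector<std::any>& row) {
    int n = 0;
    for (const auto& v : row)
        if (v.type() == typeid(int)) ++n;
    return n;
}
```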

Effective GoF Patterns with C++11 and Boost

“With C++11 we broke all the guidelines, we broke all the idioms, we broke all the books” - Herb Sutter. Even the GoF book is broken now too. Let us see how some of these patterns can be implemented with C++11 more effectively. And maybe you will be surprised by what happens to some of these patterns. We will have a look at some Boost libraries that already offer a generic implementation of some GoF patterns, and at what their usage looks like.
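One example of the effect: the Strategy pattern no longer needs an abstract base class when std::function and lambdas do the type erasure (a sketch, not necessarily the talk's version):

```cpp
#include <functional>
#include <utility>

// GoF Strategy, C++11 style: no Strategy base class, no virtual
// dispatch boilerplate - std::function erases the strategy's type,
// and lambdas supply the concrete strategies.
class Calculator {
public:
    explicit Calculator(std::function<int(int, int)> strategy)
        : strategy_(std::move(strategy)) {}
    int apply(int a, int b) const { return strategy_(a, b); }
private:
    std::function<int(int, int)> strategy_;
};

int demo() {
    Calculator add([](int a, int b) { return a + b; });
    Calculator mul([](int a, int b) { return a * b; });
    return add.apply(2, 3) + mul.apply(2, 3);   // 5 + 6
}
```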

Embedded Development, What's Changed in 30 years?

In my travels training and coaching embedded engineers, it seems not much has changed during my career of 30 plus years. Engineers debug with printf, equate single stepping with unit testing, run their code only on their target platform, and are obsessed with micro optimizations. Things have changed. C is much the same as it was all those years ago, but we have many improved techniques. We'll look at how to use TDD effectively for embedded C as well as the latest in faking, stubbing and mocking those problematic dependencies on hardware, operating systems and third-party packages. To get the feel for it, we'll write some code and try the ideas out. Bring your laptop with wifi and a browser and a friend.
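A minimal sketch of the test-double idea for a hardware dependency (hypothetical names, no particular mocking framework):

```cpp
// Production code talks to an interface instead of the hardware
// directly; the test substitutes a fully controllable fake.
struct TemperatureSensor {
    virtual ~TemperatureSensor() {}
    virtual int read_celsius() = 0;
};

// The logic under test: no hardware knowledge baked in.
bool overheated(TemperatureSensor& sensor) {
    return sensor.read_celsius() > 85;
}

// The fake: runs on the development machine, no target needed.
struct FakeSensor : TemperatureSensor {
    int value;
    explicit FakeSensor(int v) : value(v) {}
    int read_celsius() override { return value; }
};
```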

Ephemeral Unit Tests Using Clang

Ever had to work on a legacy code base? In this talk we
will show how to use clang to generate unit tests on the fly for C++.
These unit tests can be used in the same way that any other unit test
is but they do not have to be stored: they are targeted at the
refactoring change you are making right now. Once refactoring is
complete and the unit tests pass (or fail in precisely the expected
ways) they can be deleted. Further code changes/refactorings are
accommodated by simply regenerating the unit tests as needed. We will
also talk about strategies for measuring code coverage and generating
unit tests that guarantee every branch is tested.

Extreme Startup

In this hands-on workshop we aim to simulate product teams building software and delivering it into a market. Attendees form teams and compete to build the best product. Through the session you can continue to refine and upgrade your software, releasing new versions and testing their performance in the market. Once your software is live it will begin to accrue points, as simulated users use the software and score it against how well it fits their needs. The earlier you release your software, the sooner you will start accruing points, and the earlier you can learn something about the market, which should inform your next iteration. In the lean startup movement, this is known as the Build-Measure-Learn cycle.

The aim of the workshop is to simulate software development in a quickly changing environment, where agile techniques should excel. How quickly can we iterate? What are the bottlenecks? Which techniques are most valuable? Do any fall by the wayside? Are any particular languages better or worse in this environment?

Each team needs at least one developer, but product managers can also actively take part in the simulation.

Teams need a laptop (or more than one) with development tools allowing them to build and run a small webapp in a language of their choice (e.g. Ruby, Java, Python, C#, nodejs, Scala, etc.).

As a practical session, this is a great chance to have some fun showing off your coding skills as well as your project management strategies, and hopefully think about the above questions.

This workshop has been run successfully at a number of conferences including: XPDays, XP2011, ROOTS 2012 and Agile on the Beach.

Fear and loathing on the agile trail

Many people may be familiar with the 'Fear and Loathing in Las Vegas'
book (Hunter S. Thompson) and film (Johnny Depp/Terry Gilliam). Less
well-known is a collection of his articles that cover the collapse of
the Democratic party during the 1972 US presidential campaign, 'Fear
and Loathing on the Campaign Trail'. This session, without recourse to
Hunter Thompson's excesses or Ralph Steadman's illustrations, will
make the case that agile, despite a unifying manifesto, has
conflicting interpretations and implementations that undermine the
concept of agility. We will examine agile adoption from first
principles to develop an understanding of why many agile teams aren't
deriving the benefits they hoped for, and why many Agile teams are
not, in fact, agile.

Using an interactive, choose-your-own-adventure style storyline, based
on real-life experiences from many teams, this session explores common
anti-patterns experienced by new and experienced agile teams. In each
phase of the adventure we will learn how to recognise mis-application
of an agile principle or practice, what forces might be causing this,
and some interventions that can be helpful in overcoming them.

Since so many of the issues faced by teams are context dependent, we
will use input from the attendees to frame the story throughout the
session. Even though the resulting context may not apply directly to
the attendees' own teams, the storyline will provide a useful tool to
take away and use to gain insights into specific agile environments.

From plans to capabilities

Agile can be seen as a shift from planning towards a more capability-based way of solving problems. Different agile approaches balance planning and capability in different ways. Many agile adoptions fail because of a conflict between a planning mentality in the organisation and the more capability-based mindset of agile. The relationship between agile and planning is evolving, with ever more capability-based approaches gaining in popularity. The goal of the presentation is to show how you should let your context guide the amount of planning you need in your software development process. You will also learn to tell the difference between a Chuck Norris and a Cowboy process.

Functional Programming for the Dysfunctional Programmer

Functional Programming is undergoing a surge in popularity with new
and exciting languages, blog posts and books appearing every week. FP
techniques are especially powerful when dealing with modern multi-core
processors and distributed systems, even though the theories on which
it is based are decades old. But despite the promised benefits, FP can
be difficult for programmers to grok.

During this tutorial you will solve lots of coding problems,
demonstrating that FP can be thought of as a style of programming
rather than a language feature. And you’ll see that by adopting
techniques and idioms from FP you can make your code cleaner and safer
no matter which language you use.

This will be an extremely practical session in the style of a dojo or
code retreat. You will be programming many tasks throughout the day
and so you will need a laptop with your text editor of choice. You
will not need any experience of FP nor any theoretical background, but
if you have either of those then bring them to share!

Generic Programming in C++: A modest example

In this session, I will take a request from the boost mailing list: “Why doesn’t boost have hex/unhex functions, I think they would be useful” and walk through the design and implementation of these algorithms for the Boost.Algorithm library.

Although the functions are simple, there are a surprising number of interesting design decisions that were made along the way – and I will explore them in this talk.

Issues covered:

Generic programming design

Dealing with iterators (including the problems with output iterators)

Template metaprogramming (including enable_if)

Boost.Exception

Fit and polish of code.

Note: If you want a sense of the presentation, here is video (and slides) of an older version

Getting Legacy C/C++ under Test

C++ has a rich set of libraries which support the creation of fake and mock objects (i.e., test doubles). The overwhelming majority of them are based on subtype polymorphism, deriving the test doubles from a common base class so that they, as well as the real objects, can be injected into the system under test (SUT). This has the known disadvantages of decreased run-time performance and the software engineering issues that come along with inheritance, such as tight coupling and fragility. Besides this, these libraries often lack integration with an IDE.

We at the Institute for Software are eager to improve this situation and address it with Mockator Pro, a new mock object library and a supporting Eclipse C++ Development Tooling (CDT) plug-in that assists the user in creating test doubles. Mockator Pro supports both C++03 and the new standard, C++11.

Besides the aforementioned subtype polymorphism - which is supported by a new “extract base class” refactoring - Mockator Pro offers static polymorphism to inject the test double into the SUT via template parameters. The test doubles are realised as local classes, and are therefore located in the same function as the unit test code, an increased locality that makes it easier to keep them in sync. A third form of mocking is the use of link seams, which allow us to replace existing functions with our own test double functions without touching the SUT.

This talk shows how useful Mockator Pro can be for getting rid of fixed dependencies in your code base by extracting template parameters and base classes. You will see that Mockator Pro is able to generate code for test doubles that allows you to track if your SUT is properly using them. Our static code analysis checkers recognize missing member functions, constructors and operators in the injected test doubles and provide default implementations for them through Eclipse quick fixes.

Besides a practical session with many code examples, we will also give an introduction to testing with mock objects in general and compare Mockator Pro to other well-known mock object libraries. Additionally, we will talk about the use of C++11 and its new features in our mock object library, which allowed us to greatly reduce the need for preprocessor macros, thereby providing transparency and the chance to debug when problems arise.

Git - Why should I care about the index?

One of the unique features of Git is its “index” but it is often poorly
understood and frequently cited as confusing, especially for newcomers to Git.

What is the index and why does Git have it?

The index is a staging area for your next commit. It is also a “stat” cache
which ensures that Git has the performance characteristics that it needs. The
index is also the merge resolution area. The fact that the index supports
these diverse purposes contributes to its conceptual complexity.

To dispel some of the confusion that surrounds the index we look at the
internals, what is stored in the index and how it is stored.

We look at what operations read and update the index in normal usage. We also
examine what commands we can use to deliberately affect the index and why we
might want to perform them.

The inspiration lasted, and in early 2012 Alan joined a team at
Canonical where the right conditions existed to put these practices into
effect. This talk follows a project developing a C++ systems component.

Every organisation is different and, in addition to the common learning
curves for team members (TDD, C++, C++11, OO, problem and solution
domains) they were also faced with geographical distribution across
timezones. Fortunately, with a bit of intentional practice, the tools
for remote working are, at last, up to the job.

The talk will cover the organisational, process, and technological
challenges and the solutions adopted.

Gumption traps Reloaded

In the book “Zen and the Art of Motorcycle Maintenance: An Inquiry into
Values” the author, Robert Pirsig, talks about “gumption traps”; things
that sap motivation, such as not having the correct tools. In this
workshop, we invite participants to identify gumption traps. We will be
drawing influence charts, an approach from Systems thinking, to help
groups explore suggestions for how to avoid or combat gumption traps.
We ran workshops on this topic at OOPSLA 2004 and ACCU 2006, but this
session incorporates some new material building on ideas from Daniel
Pink's book “Drive”.

Intended audience:
Software developers, managers and coaches who would
like to avoid gumption traps for themselves and their teams.

Process:
The purpose of the opening presentation is to introduce the topic and lead
into group discussion. Participants are invited to share relevant stories
with the session group, this leads into a brainstorming session on
drawing out factors affecting motivation in software development.

The session participants will then be divided into smaller work groups and
given a worksheet on influence charts. Each work group will select a story
based on their own experience and work to create an influence chart that
shows factors impacting motivation.

To wrap up, each work group will take a turn to present their chart and
insights gained to the session group.

Timetable:
00:00 - 00:05 Introductions
00:05 - 00:20 Slide presentation on identification of gumption traps, suggestions for how to avoid them
00:20 - 00:25 Questions
00:25 - 00:40 Sharing Gumption trap stories
00:40 - 00:45 Divide session participants into work groups
00:45 - 01:15 Each group explores a story using influence charts (aka diagram of effects)
01:15 - 01:30 Each group presents what they learned to the session group

Health and Hygiene in the Modern Code Base

We all know what good code looks like and we know what our current
code looks like. But do we know what normal is - what code is most
likely to look like, and how it comes to be that way? In this
workshop, Michael Feathers will lead you through a series of code
readings, and convey measures of code quality across a number of
domains. You will leave with a realistic sense of the limitations of
code as a medium, and of the areas where we can legitimately expect
excellence and promote practice to foster it.

How to Narrow Down What to Test

Nowadays testing, especially writing automated test cases, costs a lot. This looks like an extra expense in the short term, but it saves a lot of trouble in the long run. However, not every organisation can afford to spend expensive coding time on testing that is seen as adding no real value for the customer.

The best way to handle this is to be effective: test those parts of the code which really need to be tested. In my presentation I'm going to share several methods that can be used to find the areas which are worth testing, so that organisations do not have to spend more effort on testing than is absolutely necessary. These methods will be illustrated with Java and Ruby on Rails examples.

How to program your way out of a paper bag

Frequently programmers complain that people they interview, or
colleagues, or blog writers clearly couldn't program their way out of
a paper bag. This fills me with fear, since I have never tried and
therefore am not sure I can program my way out of a paper bag.
Anecdotal evidence suggests if you can code FizzBuzz that's good
enough. I will investigate the cause of the cry of dismay, quickly
discounting FizzBuzz competence as proof of paper bag escapology.
Demonstrations of how to escape a paper bag will be given, taking
inspiration from machine learning (ML) algorithms as a starting point.
No background in ML is assumed.

Hybrid programming

The presentation will discuss the architectural question of using multiple languages in a project. We often need flexibility in some parts of a project and maximum performance in others. Various solutions to that problem will be discussed - in particular, the use of an interpreted language for the bits that need flexibility and a compiled one for the performance-sensitive bits. The presentation will show the most convenient ways to mix various languages, with a larger example written in Lua and C++. A short introduction to Lua will also be given.

Is eXtreme Programming still alive and kicking?

Back in 2000, I worked for 3 years as a Java developer at Connextra, one
of the first companies trying eXtreme Programming in the UK. If you've
ever been asked to write stories in the “As a… I want… so that…” format then
blame us - we were also the originators of Mock Objects. Perhaps because
of the scary moniker which implies a full-on approach, other agile
approaches have become more popular across industry in subsequent years.
It's interesting to note that no organisation equivalent to the Scrum
Alliance or the Lean SSC exists that is dedicated to promoting XP and
advancing the state of practice - unless you count London's very own
eXtreme Tuesday Club.
XP has therefore become more of a grass roots approach for software
developers with most organisations opting for much less extreme agile
approaches although still pulling in milder XP practices such as user
stories, velocity, and test-driven development.

I've worked as an independent consultant helping teams figure out how to
apply Scrum in various contexts, and have seen some cool and crazy things done.
In June 2012, I started work as a coach at Unruly Media, a company founded
by some of the original team members at Connextra. Immediately before
this, I also worked for Industrial Logic who built upon their own
IndustrialXP method and re-evaluated many XP practices from a Lean
perspective. Over the last year, I've found it really interesting to see
how many old-school XP practices are still helping developers and where
gaps remain (such as working with UX and Infra specialists). It's also
been interesting to see how open our XP team is to “embracing change” and
experimenting with ideas from Kanban and Scrum.

Come to this session if you have an interest in hearing about current
state of XP and how it's been evolving.

Java 8 a new beginning

Java 8 will introduce lambda expressions to Java, and include a whole new library – a bigger change to Java
than Java 5. The change is a simple evolution, but is also a complete revolution of Java.

Java is now often portrayed as boring, staid, a legacy technology. Yet the Java Platform based on the JVM is
a vibrant arena: Scala, Groovy, JRuby, Clojure, Jython, Ceylon, Kotlin – a mix of static and dynamic
languages pushing the use of the JVM to new places. Java has to compete with Scala, Ceylon and Kotlin for the
“static language for the JVM” crown; the association of Java as the language for the JVM is long past. Can
the changes to Java in Java 8, and later Java 9 and Java 10 (there is a road map all the way to Java 12),
sideline Scala, Ceylon, and Kotlin, or is it already too late for Java?

What are the features of Java 8? Why are they just copies of what is in Scala (and possibly Ceylon and
Kotlin)? Are Ceylon and Kotlin at all relevant to the JVM-based world? What is all the fuss about? Come to
this session and join in answering some, all or none of these questions.

Lightning talks

Each session of lightning talks is a sequence of five minute talks given by different speakers on a variety of topics. There are no restrictions on the subject matter of the talks, the only limit is the maximum time length. The talks are brief, interesting, fun, and there's always another one coming along in a few minutes.

We will be putting the actual programme of talks together at the conference itself. If you are attending but not already speaking, but you still have something to say, this is the ideal opportunity to take part. Maybe you have some experience to share, an idea to pitch, or even want to ask the audience a question. Whatever it is, if it fits into five minutes, it could be one of our talks. Details on how to sign up will be announced at the event, or simply collar Ewan whenever you see him in the hallway.

Location, location, location

Geospatial information is everywhere - nearly every smartphone has a GPS chip in it, and your IP address gives clues to your physical location to every web site you visit. Between smartphones and sat-navs, most of us now have devices that are recording our location and using online services that use that location.

From finding the nearest pub in a smartphone app to analysing the spatial relationships of billions of GPS tracks, there are lots of tools, many free and open source, that can help you make sense of geospatial data quickly and easily. It has never been easier to integrate sophisticated location-based analysis into your application or business.

In this session we will learn about the key concepts behind geospatial data and analysis, including GPS, coordinate systems, projections, and different types of spatial relationships. We will discover the features of open source geospatial databases that let you query 2D and 3D data using SQL, GIS tools, and some of the online APIs that let you add mapping, geocoding and more to your application.

To finish off, I'll show you how to build a web service that finds the nearest UK postcode to any latitude and longitude, performing spatial SQL queries using an open source database and data freely downloadable from the Ordnance Survey OpenData project.

Logic Programming and Test Data Generation

Primum: logic programming computes values for variables based on relationships between known facts. In the main, introductions to it are either based on logic puzzles (cannibals and boats!) with an at best unclear relationship to the problems we write programs to solve, or on laboriously reimplementing things (arithmetic!) that we can already do perfectly well, thank you very much.

Secundum: generating complex test data is a hard problem, in part because constraints (relationships) amongst bits of the data have to be obeyed. The sadly common result is fragile tests that know too much about the details of their data.

Ergo: I will explain logic programming by showing how it can be used to generate test data from minimal descriptions of what's needed.

Managing from the Mountaintop

Whether you generally work with remote teams or are managing the risk
of events like a snowmageddon or the London Olympics, this talk is for
you. It is a case study in implementing an agile toolset that increased
transparency between the shop floor and TPTB (the powers that be), and
allowed me to manage my team from the Himalayas.

Measure and Manage Flow in Practice

Measure and Manage Flow is the third of the core principles of Kanban. It means that the members of the organisation are supposed to measure their progress and use the gathered information to improve their way of working. The most famous measurement tool for Kanban is the Cumulative Flow Diagram, but there are other usable approaches out there.

During the last two years I have tried out these different measurement approaches, and in my presentation I’m going to show you those which worked well for me. I’ll also cover how to manage your organisation using the gathered data - e.g. how to use lead time to fine-tune the delivery process - and scientific methods to ensure that the changes are permanent and the organisation moves forward.

Methodology a la carte

In the software world we have been looking for “The Methodology” to solve our software development sorrows for quite a while. We started with Waterfall, then Spiral, Evo, RUP and, more recently, XP, Scrum and Kanban (there are many others, but their impact, so far, has been more limited).
In this session I'll argue that an out-of-the-box methodology can be no more than a starting point, and that team members need to tailor it to the specific needs of their project by taking into account their surrounding context - e.g. company culture, constraints, team preferences, etc. Furthermore, I'll propose an approach to building a custom methodology that, instead of starting from one or more out-of-the-box ones, starts from the team's goals and constraints and from what we know are good practices that can be used to deliver a better product in a more satisfactory way. A methodology a la carte.

Move, noexcept, and push_back() and how they relate to each other

One key feature of C++11 is move semantics with rvalue references.
However, combined with other features and guarantees of the standard
library, the consequences of introducing move semantics turn out to be
remarkable. In fact, late in the standardization process this feature
prompted a new concept for exception handling using the new keyword
noexcept. The reason was to retain backward compatibility of
push_back() for vectors. This talk will jump into the whole mess of
move semantics and exception handling. It gives a rough understanding
of what move semantics means for class designers, and why and how good
class design becomes even more of an issue with C++11.

Organizational influence hacks

In this session Roy covers six areas of influence that we can use to change the behavior of other people and ourselves. These areas can also help in answering magical questions such as “Why is that person not willing to do TDD?” or “Why is that person always late to the standup meeting?”. You can read up on this at http://5whys.com.

OTP, the Middleware for Concurrent Distributed Scalable Architectures

While Erlang is a powerful programming language used to build
distributed, fault-tolerant systems with high-availability
requirements, these complex systems require middleware in the form of
reusable libraries; release, debugging and maintenance tools; and
design principles and patterns used to style your concurrency model
and your architecture.

In this talk, Francesco will introduce the building blocks that form
OTP, the de facto middleware that ships with the Erlang/OTP distribution.
He will cover OTP’s design principles, describing how they provide
software engineering guidelines that enable developers to structure
systems in a scalable and fault tolerant way, without the need to
reinvent the wheel.

Talk objectives: Introduce a powerful framework which reduces errors and
helps developers achieve robustness and fault tolerance without
affecting time to market.

Parallelism in C++1y

Parallelism and multi-threading are two of the main topics for the
next versions of C++. Some additional concurrency support might even
show up in the small planned revision C++14, and some bigger additions
are discussed for C++17.

This talk will present some minor additions for concurrency support
in C++14, as well as some more substantial proposals for C++17
targeting mainly real parallelism (and not just multi-threading)
and generally asynchronous programming models currently discussed
in the C++ standardization committee.

Intended audience:
This talk is for programmers and designers who are interested in an
overview of discussed concurrency additions in the next revisions of C++.

Pattern-Oriented Software Architecture

Patterns offer a successful way of exploring, reasoning about, describing and proposing design ideas. There are many valuable aspects of pattern-based thinking that are overlooked in the common perception of design patterns. The original vision of patterns embodies a notion of incremental, feedback-based design – something that may come as a revelation to anyone who had mentally pigeonholed patterns together with heavier-weight design approaches. They are also somewhat broader in application than just OO framework design – something that may come as a surprise to anyone who had restricted their view of patterns to the handful of initial patterns documented by the Gang-of-Four.

This session will start off with basic pattern concepts and practices, with examples, and work through a number of more sophisticated ideas, such as the relationship between pattern-oriented thinking and incremental development, patterns and architectural styles, and how you can mine patterns in your own systems.

Real Architecture-Engineering or Pompous Bullshit?

What should software architecture be? How is it related to major critical software qualities and performance, to costs and constraints? How do we decide exactly what to propose, and how do we estimate and prove that it is justified? How can an organization qualify its own architects, and know the difference between the frauds and the experts? Would real architects recognize what software architects know and do?

We believe that most activity going under the name of architecture is NOT real. Current software architecture is no more real architecture than hackers are software engineers.

If we are just informally throwing out nice ideas, let us call ourselves Software Brainstormers. But if we are dealing with large scale, serious, and critical systems, then we need to stop using cabin-building methods and start using skyscraper designing methods. We need a serious architecture and engineering approach.

Refactoring to Functional

Knowing functional techniques leads to better object oriented code, just as knowing about objects leads to better procedural code. The trick is getting from here to there.

At a previous XpDay, several of us found that we'd developed a similar approach to writing Java, which includes a strong bias towards immutability and functional code within objects. The game changers have been a JVM that makes transient objects cheap, and Google's nearly-lazy collections library. We've found that the resulting code style is easier to understand and less prone to certain kinds of bug. Working in this style, we've found that there are some common patterns for moving to a “functional-inside” approach to code.

In this workshop, I will present some techniques for moving from imperative to functional code, with worked exercises for the participants to join in.

Robust Software - Dotting the I's and Crossing the T's

It’s been said that the first 90% of a project consumes 90% of the time, whereas the
second 10% accounts for the other 90% of the time. One reason might be that
elevating software from “mostly works” to robust and supportable requires an attention to
detail in the parts of a system that are usually mocked out during unit testing. It’s all too
easy to focus on testing the happy paths and gloss over the more tricky design problems
such as how to handle a full disk or Cheshire cat style network.

This session delves into those less glamorous non-functional requirements that crop up
the moment you start talking to hard disks, networks, databases, etc. Unsurprisingly
it will have a fair bit to say about detecting and recovering from errors; starting with
ensuring that you generate them correctly in the first place. This will undoubtedly lead
on to the aforementioned subject of testing systemic effects. Finally there will also be
diversions into the realms of monitoring and configuration as we look into the operational
side of the code once it’s running.

At the end you will hopefully have smiled at the misfortune of others (mostly me) and
added a few more items to the ever growing list of “stuff I might have to think about
when developing software”.

Ruby and Rails for n00bs

You've heard all the hype about Ruby and you've heard how great Rails is, but you've never got far with it. In this workshop, you will gain an understanding that most tutorials skip in favour of code-generation voodoo magic.
We'll be building a small application from scratch in Ruby on Rails. If the networking gods permit it, we'll have a working online application by the end of the session.

Each step along the way will follow the same pattern:

explain the goal

formulate the goal as a test

see the test fail

implement the step

see the test work

refactor if necessary

commit

release if the feature is finished

Aim: developers who have never used Ruby or Rails come out with the basics of Ruby syntax and idioms, and can get started building a simple Rails application while understanding each step of the way.

Server login considered harmful - introduction to devops practices

DevOps (Development and Operations together) and the cloud have sparked a renewed interest in configuration management tools that generate configurations for one or more servers. Over the past two years, together with Stephan Eggermont, I have used both Puppet and Chef, two relatively new configuration management tools.

While doing this, we learnt that each time we logged in to our servers, we were building up technical debt for our configuration management. Hence 'Server login considered harmful'. We try to 'import software development;' and use practices like continuous integration and TDD. However, we have to keep ourselves honest: as long as we can't recreate a server completely from our scripts, we are not done.
We will explain why we do this, and what obstacles we encountered. After that we will introduce Chef as a way to do 'infrastructure as code' and show how we work with it by doing some live coding. We will invite some participants to pair with us in front of the audience.

Come to this presentation to learn from our DevOps mistakes and successes, see if you can benefit from the tools we use, and if you also have experience, there is room to exchange ideas.

SFINAE Functionality Is Not Arcane Esoterica

The phrase “Substitution Failure Is Not An Error”, commonly known as
SFINAE, refers to part of the template argument deduction rules in
C++, but what does it mean and why is it important?
This session will explain the rule and why it's needed to make
function templates usable. We'll also see how C++11 extended and
changed the rule into the more powerful “Expression SFINAE”. We'll
see how std::enable_if and similar utilities can be used to create
template libraries and APIs that are easier to use and harder to
misuse, with clearer error messages than one usually expects from
template problems. We'll also cover common misunderstandings and
pitfalls people encounter when using SFINAE.
The session should be suitable for anyone with a working knowledge of
C++ templates, especially library and API designers who are prepared
to trade off some implementation complexity to provide better
interfaces.

Taking Scala into the Enterprise

This will be a 75-minute presentation (+15 mins Q&A) about Scala, aimed
at advancing beginners. The talk will concentrate on how best to bring
this ever-popular object-functional language into an enterprise.
At the very beginning, there will be some of the basics, including case
classes, object functions and the collections framework. The talk will
be about getting the most out of Scala, working with popular Java
frameworks and the build tools, and will touch on some of the new
features of the upcoming Scala 2.10.

Test driven development on the Raspberry Pi

This will be bits of presentation mixed with a prepared kata to show how we do it.

Doing test-driven development for embedded devices is possible. It has
its own set of constraints, such as:

limited availability of hardware due to time or cost constraints

some hardware will not run your favourite programming language

hardware is slow and/or has a limited amount of memory available

real-time constraints on several parts of the software

you often have to combine multiple devices, each with their own hardware and software interfaces

Luckily Moore's law also applies to embedded development so we can
increasingly use higher level languages, which make practices like test
driven development and continuous integration feasible for embedded
development as well. And with some creativity we can write end-to-end
tests for the parts of our solution that defy unit testing.

In this presentation we will show how we developed a soft-drinks vending
machine prototype using Raspberry Pis, Arduino, Lego and various bits
and pieces. See how we used test-driven development and hexagonal
architecture to keep our code clean and our minds sane.

Description of the hands-on session to go with it:
Embedded TDD on the Raspberry Pi hands-on
On the one hand, 'embedded' becomes more and more software. On the other, software gets embedded in more and more things. With devices like the Raspberry Pi and Gumstix rolling your own becomes feasible. At least we can experiment with getting fast feedback cheaply.
Test Driven Development (TDD) can drive your design and give fast feedback on the quality of your work. Doing this on an embedded device gives some additional challenges - it's often not so easy to talk to the device & get your software on it, while the choice of programming language is often limited (C anyone?).
Join this session to have some fun with TDD on a Raspberry Pi with a two line display and some buttons. Add a feature to our vending machine by writing some end-to-end tests and unit tests first.
If you're lucky, you can run your tests on the Raspberry Pi from our build server, but since, as in the 'real world', 'the device' is not that often available… you'd better be lucky, or develop with discipline ;)

The Actor Model applied to the Raspberry Pi and the Embedded Domain

The Actor Model has interesting properties that could be used for dealing with complexities posed by modern embedded systems. Using actors as compositional units to describe these systems is a new proposal which stands out and challenges conventional approaches.

This talk will demonstrate how, by creating a layered architecture for hardware modules and partitioning complex systems into smaller units, testing becomes much easier, runtime errors are contained, and the architecture becomes maintainable.

Talk objectives: Provide an overview of the embedded systems design methodologies and introduce Erlang Embedded, a new proposal to deal with the issues we face in today's complex embedded systems.

The art of reviewing code

Making sure that the code you write is seen by at least one more person before it goes into production is a great way to increase its quality. One way of doing that is via code reviews, in which code is checked by peers or code owners after it has been written. Code reviews are gaining popularity again in many companies and communities.

They are also often unpopular among developers and managers, for reasons that have a lot to do with the time needed to carry them out and the fear developers may have of receiving critique of code they created themselves. In this talk we first look at the advantages and disadvantages of code reviews. We will then examine how to incorporate them into existing processes. Finally, we go into the fine art of giving and receiving code critiques: how much can be done in a code review, what kinds of critique are useful to give, how to handle critique you don't agree with, and how to handle conflicts that might arise.

The bright side of exceptions

In many programming languages, the term “exception” really means “error”. This
is rather unfortunate because an exception is normally just something that
does not happen very often; not necessarily something bad or wrong.

Some ancient languages like C don't support exceptions at all. You need to
indicate them with specific return values from functions. Languages with
explicit support for exceptions (e.g. Java, C++ or Python) provide built-in
facilities for handling them. The most traditional approach to this is the
“try/catch/throw” system, whatever it may actually be called in your favourite
language. As it turns out, this system suffers from limitations which affect
its usability in complex situations. The two major problems are 1. the
obligatory stack unwinding on error recovery and 2. a separation of
concerns that is only two-level (throwing / handling).

In this talk, we will demonstrate the benefits of using a system which does
not suffer from these limitations. More precisely:

- the stack is not necessarily unwound on error recovery, which means that the
full execution context at the time the error was signalled is still
available,

- the separation of concerns is 3-fold: the code that signals an error (throw)
is different from the code that handles the error (catch) which itself is
different from the code that chooses how to handle the error (restart).

It turns out that an exception handling mechanism like this is able to handle
more than just errors and in fact, even more than just exceptional events. In
Lisp, this system is called the “condition” system. Conditions are the bright
side of exceptions: not necessarily bad, not even necessarily exceptional.
Conditions become an integral part of your toolkit of programming paradigms.
We will provide two examples of “condition-driven development”. The first
will show how to handle actual errors, in a more expressive and cleaner
fashion than with a regular try/catch/throw system. The second example will
demonstrate the implementation of something completely unrelated to error
handling: a user-level coroutine facility.

The Git Parable

Learning and using Git commands is all well and good, but until you have a
working understanding of how Git itself thinks and works, it will still feel
like a strange beast with lots of sharp and pointy bits.

Based on Tom Preston-Werner's essay of the same name, this introductory Git
talk will start from scratch - using nothing more than a text editor and
basic file system operations - to develop a straightforward version
control system that is very similar to Git. This gives you a mental
model of how Git works, which will help you use Git more effectively and
might even clear up some common misunderstandings if you come from a
centralized version control background.

The talk covers how Git does branching and merging in a distributed (and
partially disconnected) environment, how Git allows you to rewrite your
commit history to present a prettier set of changes to your peers, and
also why the concept of a staging area is so useful in your day-to-day
work. Finally, the talk explores some of the techniques that Git employs
to become incredibly fast and space-efficient.

Git might still have some sharp and pointy bits, but after this talk you
should be better equipped to understand how they can work to your advantage.

The talk is followed by a Q&A session, in which the speaker makes a futile
attempt at answering any and all Git-related questions that might arise.

A history of a cache

This is the story of a cache component and its evolution from a simple C++ std::map to a high-performance, multi-threaded cache in shared memory, called from legacy code, all without changing the calling code's interface. This talk will cover: the bugs that TDD didn't catch and how they were found; how to test concurrent software; how the design and use of the interface critically affects the design of a concurrent class and its iterators; the use of invariants; the joys of using and managing shared memory and of interfacing to legacy code; and some of the key drivers behind a 30- to 40-fold performance increase over the original architecture and code.

The true cost of software bugs and what to do about it

Bugs have been estimated to cost the global economy $600bn annually, making debugging an endeavour of similar scale and impact to solving the Euro crisis. Yet startlingly little attention is paid to the problem by wider society or by the industry itself. In this talk we present the results of our own work with Cambridge University to better estimate the economic costs of software bugs, and go on to examine the state of the art of techniques and technologies to address the burden.

We split the problem of debugging into three categories, and discuss tools and techniques to address each:

Preventing bugs in the first place: programming languages and techniques to reduce the number of bugs created.

Finding the bugs lurking in your software: static and dynamic analysis tools and testing techniques to uncover the bugs before your customers see them.

Panic debugging: tools and techniques to help find and fix the bugs found during development, testing, or (worst of all) reported by end users.

We show some of the more interesting work addressing each of the above, including an overview of free (as in speech) software tools as well as proprietary ones. The talk focuses on practical use of tools and advice, and explicitly does not cover the more social aspects (methodologies etc.). In particular we look at the use of languages and libraries to prevent bugs, preventative tools such as Klocwork, Coverity Prevent and Valgrind, advanced debuggers such as UndoDB, and the tried and trusted, and much maligned, printf.

We also examine the economic and psychological barriers to preventing broader adoption of the tools and techniques we cover, and present the results of recent research quantifying the benefits that can be obtained by using more advanced tools and techniques to deal with bugs.

Transactional Memory for C++

SG5 plans to bring forward a proposal for two types of transactions based on V1.1 of the Draft Transactional Memory for C++, which has been worked on for four years.
https://sites.google.com/site/tmforcplusplus/
This proposal supports two types of transactions:

an isolated transaction that isolates from non-transactional code (as well as other transactions) through some kind of safety annotations

an ordinary transaction that allows communication with non-transactional code (but is isolated from other transactions)

We further show different techniques for supporting various levels of safety annotation, from fully static compile-time checking to some levels of dynamic checking, to ease the burden on programmers.
It is the intention of the group to bring forward a fully worded proposal for Bristol 2013 as a Technical Specification.
Some of you may wonder whether it is too early for TM. Hardware is coming, with Intel's recent Haswell announcement, IBM's BG/Q, and previously Sun's Rock. Software TM support has been here for quite some time, with Intel's STM support of the 1.0 draft and, most recently, GCC 4.7's nearly full support of Draft 1.1.
And if you think it is still too early, consider that one of Hans Boehm's discoveries was that locks are impractical for generic programming, because the ordering of locks is generally not visible until instantiation. With the introduction of locking (and atomics) in C++11, this becomes a difficult problem to avoid. Transactional memory is one way to solve it. It also helps with fine-grained locking on irregular data structures, and with read-mostly structures.
In this talk, we will present the proposal for Standardization in C++, including supporting evidence of the usage experience, and performance data.
Finally, if you are still wondering whether transactional memory is fast enough: there are many different software transactional memory systems with different performance characteristics, so there is probably one that fits your needs.
TM is coming in many different forms (HW, SW, hybrid systems, lock elision), and for C++ to keep pace with the many other languages that already support TM, this is the right time to be prepared with a sound proposal.

Unspecified and Undefined

Strange things can, and will, happen if you break the rules of the
language. In C there is a very detailed contract between the
programmer and the compiler that you need to understand well. It is
sometimes said that upon encountering a contract violation in C, it is
legal for the compiler to make nasal demons fly out of your nose. In
practice, however, compilers usually do not try to pull pranks on you,
but even when trying to do their best they might give you big
surprises.

In this talk we will study actual machine code generated by snippets
of both legal and illegal C code. We will use these examples while
studying and discussing parts of the ISO/IEC 9899 standard (the C
standard).

Use the Source

Using C or C++ source as input for user-created tools used to be nearly impossible due to the complexity of the languages, especially C++. The open source compiler clang <http://clang.llvm.org/> provides access to its internal data structures, exposing all details of the source via a library interface. This library provides a basis for various tools: code-completion hints, static analysis, code transformations, etc. This presentation gives an introduction to creating your own tool using C++ source.

Using data to understand how you develop software

What do you do once you've achieved agility? Once you can respond to change from the business and you're delivering value predictably? Once you've adopted all the useful technical practices? Once you've created an environment that amplifies learning? Where does your next improvement come from? This is a report on how we're trying to use data to better understand the things that work in our context. I'll explain the meaning behind the metrics we're collecting and what we've learnt from them.

I'll also be asking the audience to share their experiences of trying to use data to improve what they're doing.

What the C++ Library Working Group did next

The C++ Library grew significantly for the C++11 standard, embracing new
language features such as move-semantics and list-initialization, adding
basic support types like function and tuple, adding larger facilities such
as regular expressions and an extensible random number facility, and adopting
the new memory model and providing basic concurrency primitives such as atomic
operations, threads and locks, and a basic futures facility.

So what comes next?

The standard continues to evolve, and if anything the pace is accelerating.
The ISO working group has initiated, at the time of writing, 10 distinct
study groups to investigate ways to move the language and library forward, and
there is interest in adding more! What will the next generation of C++ libraries look like, and what are the most active topics of interest for likely extensions?

As the current chair of the Library Working Group, Alisdair Meredith is
uniquely placed to talk about its recent accomplishments, current plans, and
future directions.

Worse Is Better, for Better or for Worse

Over two decades ago, Richard P Gabriel proposed the idea of “Worse Is Better” to explain why some things that are designed to be pure and perfect are eclipsed by solutions that are seemingly compromised and imperfect. This is not simply the observation that things should be better but are not, or that flawed and ill-considered solutions are superior to those created with intention, but that many solutions that are narrow and incomplete work out better than the solutions conceived of as being comprehensive and complete.

Whether it is programming languages, operating systems or development practices, we find many examples of this in software development, some more provocative and surprising than others. In this talk we revisit the original premise and question, and look at examples that can still teach us something surprising and new.