Preconference

Domain-Specific Languages and Model-Driven Development have moved from scattered successes, through industry hype, to increasingly widespread practical use. Well-attested benefits include raising the level of abstraction, improving productivity, and improving quality. The main questions are no longer what or why, but where and how. This tutorial will teach participants about Domain-Specific Modelling and code generation, where they can best be used (and where not), and how to apply them effectively to improve their software development.

This tutorial introduces DSM and looks at how it differs from modelling languages like UML, which focus more on the level of the code world. This is followed by real-life examples of DSM from various fields of software development. The main part of the tutorial addresses the guidelines for implementing DSM: how to choose where to use it, how to identify the domain concepts and formalize them into a metamodel, different ways of building code generators, and how to integrate generated code with legacy or manually-written code. Participants will have the chance to learn practical skills in language creation and modification exercises.

Exploratory testing is an approach to testing that emphasizes the freedom AND responsibility of the tester to continually optimize the value of his work. This is done by treating learning, test design, and test execution as mutually supportive activities that run in parallel throughout the project. That's different from traditional scripted testing, which focuses on accountability and decidability, but usually at a much higher cost. Excellent exploratory testers are avid learners and questioners. They are at home under conditions of uncertainty.

They are able to generate new ideas quickly. Exploratory testing done well is much like a martial art, and it's best learned in much the same way: by practicing.

Seize this unique occasion to learn about the D programming language from two of its creators. D has a low barrier to entry for any n00b who knows how to code in other Algol-derived languages, but is original in many interesting ways. We'll spend the morning learning the basics and the afternoon playing with the amazing features that make D uniquely powerful. Bring your laptop along to participate in a D coding contest that may win you an advance copy of the book The D Programming Language. (Contrary to rumors, the book will hit the shelves this year, so you'll have two years to read it before the world ends.) Show up if l33t terms like compile-time function evaluation, eponymous templates, contract programming, or collateral exceptions pique your curiosity.

Keynotes

This talk is how I introduce myself to programmers. It is especially aimed at programmers who may wonder why any intelligent person would willingly be a tester, why projects need testers, and how to work with testers. I will talk about what makes skilled testers different and special, and about the commitments I make to the programmers I work with. I will help you set a high but reasonable standard for the testers you work with.

If I am successful, then by the end of my talk, at least a few programmers in the audience will have become testers.

It is often said that the difference between architecture and design is one of scale. Architects are concerned with "big" design and "big" integration. As developers become architects and architects become enterprise architects, the systems they design become ever bigger and more complex. But does big necessarily need to mean complicated?

In this talk Dan argues for a new appreciation of simplicity, using examples from systems analysis, enterprise integration, and build and deployment. He provides strategies to help you extract the simple essence from complex situations and problems, and to distinguish the simple from the simplistic.

"I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity." - Oliver Wendell Holmes

The Roots of Scrum: How the Japanese Lean Experience Changed Global Software Development (Jeff Sutherland)

60 mins

Dr. Jeff Sutherland covers the history of Scrum from its inception, through his participation with Ken Schwaber in rolling out Scrum to industry, to its impact on Google, Microsoft, Yahoo, Oracle, Siemens, Philips, GE, and thousands of other companies. He describes the relationship of Scrum to experience at Bell Labs, MIT, iRobot, and the Grameen Bank, his communications with Kent Beck, who used Scrum experience to help create XP, and how the Agile Manifesto accelerated Scrum adoption. Most important, he concludes by describing how team spirit is at the root of product innovation and hyperproductive teams, and how that spirit can transform organizations.

Sessions

A Kanban System for Software Development provides an alternative means of creating an Agile Development process using Lean Thinking. Creating a Kanban System is not as simple as adopting a previously defined process as a starting point. Instead, a team needs to come up with a model of its own process, which will form the basis for further continuous improvement. This talk will introduce 5 steps that a team can use to create their own Agile process using a Kanban System for Software Development.

A Simple Matter of Configuration - how can we tame the complex world of configuration? (Roger Orr) {Slides}

90 mins

Configuration is a vital element of many programs. However, it is often hard to get configuration right, leading to wasted time and programs that do not work correctly.

In my experience explicit discussion of design options for the configuration of a program is rare and, all too often, the choice is made arbitrarily. I believe that looking at the usage patterns of the program early on helps to pick the best method(s) for configuring it and hence reduce the cost of problems caused by configuration issues.

Configuration is a complex subject and there doesn't seem to be an obvious single solution that works in every case, but we can try to fight against the common 'anti-pattern' of using multiple, unrelated configuration techniques at the same time.

I will look at some of the issues surrounding configuration; firstly by trying to answer the "six key questions" (who, why, what, where, when and how) for this subject to help understand the size of the problem and the forces at work.

I'll then sketch out some possible design patterns and look at their trade-offs and interaction with the intention that I (and you) can reduce the pain of getting programs working in different environments by making more informed and deliberate decisions.

I am looking at the problem in the general case, using examples from various problem domains, and expert knowledge of a particular language or API is not assumed.

A Year of Misbehaving - A Retrospective on using BDD in a commercial enterprise (Mauro Talevi)

90 mins

So you've heard about Behaviour-Driven Development (BDD) and you're intrigued. You think it might just work but only in a very Agile environment. You ask yourself if it could work in a realistic commercial enterprise. We've done just that for the past year: we'll present a retrospective on using BDD on a high-profile project in a global corporate bank. We'll share the lessons learnt, including what worked well and what could be improved. We'll also introduce BDD and JBehave for Java-based development environments.

In this session, we follow the adventures of a software developer who unexpectedly finds himself in the role of head of quality assurance. Hilarity ensues as worlds collide and our protagonist realizes he doesn't even speak the same language as his new colleagues.

This session will expose some of the gruesome details of real-life software testing. Courses for testers and shiny testing tools are readily available, but how widely are they actually used, useful, or even usable? Is there a need for a "professionalism in testing" initiative?

Also, while testing is widely accepted as a necessity, it isn't yet always seen as an integral part of the development process. So we will be looking into this and related (ongoing) struggles with management and development.

Lessons learned and a chance to exchange stories from the trenches will round off the session.

Apache Hadoop (http://hadoop.apache.org) is an open-source implementation of MapReduce, the algorithm famously used by Google to process large data sets. The Hadoop framework provides a reliable, scalable distributed environment for analysing massive amounts of data, and is used by companies ranging from startups with a single server to Yahoo!, whose largest cluster has 4000 nodes.

I will introduce Hadoop and the MapReduce programming model, with a discussion of how the Hadoop framework helps you process massive amounts of data in parallel across many machines. The session will cover subprojects of Hadoop, including the distributed file system HDFS and the distributed database HBase. Hadoop is written in Java, but I will show how the Streaming library enables developers to access the power of MapReduce from any language.

I will also introduce Pig and Hive, two subprojects of Hadoop that enable you to run complex queries using a high-level query language (Pig) or an SQL-like syntax (Hive). These tools enable ad-hoc querying and analysis of large datasets, generating the appropriate MapReduce jobs for you.

For software engineers, creating new code is one side of the coin, while checking its conformance, sustainability and quality represents the other. Adding new features is considered a creative act, whereas improving quality is rarely respected and often neglected. If problems are built into a software architecture, the longer we postpone appropriate activities, the more complicated and expensive it becomes to get rid of them. One way to avoid such design erosion is to regularly apply architecture refactoring, which extends code refactoring to also cover architecture artifacts. The tutorial will introduce the core concepts of architecture refactoring, introduce refactoring patterns, and differentiate it from re-engineering.

Atomics are one more mechanism to deal with concurrency in C++ programs, and they're considered a mechanism "for experts only".

Of course every decent C++ programmer considers himself an expert in concurrent programming, so he might consider using these atomic objects and their operations.

But there's a reason why these components are considered "expert-level".

Their use has some pitfalls: they might not be as atomic as a naive user might expect, and they might generally not do what a user expects of them.

This talk will present the C++ atomics and the niche in concurrent programming that they actually fill. Some examples of how to use them will be explained, and the talk discusses what programmers can and cannot expect from atomics.

Intended audience:

This talk is for programmers and designers of projects that have to deal with concurrency and who consider using atomics.

Virtual Radiologic Corporation (http://www.virtualrad.com) is a provider of 24/7/365 teleradiology service for thousands of clients throughout the United States, associated with numerous radiologists in the US and overseas. Several years ago we began a project to write our own radiology PACS (Picture Archiving and Communications System) application to support and grow our core business. A key component of this system is an application used by radiologists to review medical images (e.g. CT, MRI, X-ray) for diagnosis.

Given the mission-critical nature of the software, as well as the fact that it is under government regulatory control, we designed in a testing system and methodology that allows us to automate test executions and reporting through nightly builds. The automation system is built around a command-dispatching mechanism rather than the typical event-simulation mechanism; this provides a variety of advantages above and beyond the testing uses. Our experience demonstrates that designing for testability yields huge dividends in practice.

The case study will cover, briefly, the background of the application with a short demonstration of the types of functionality it supports, as well as an overview of the testing framework. This will be followed by an examination of the mechanisms (and code) that support the command-dispatching system. Examples of how specific test cases are constructed will be demonstrated, as well as an exploration of the test execution framework. Finally, we will look at the support for an automatically generated Requirements Traceability Matrix which is used in our FDA regulatory processes.

In this talk we'll explore some techniques, tactics and strategies for building, maintaining and driving a better software team and the people behind it. What does it take to lead people, to drive them? What does it mean to be in a constant state of productivity? How do you create a super effective and creative team, even if you don't have an all-star team to begin with? What tools, best practices and techniques can and should a team use to deliver great software? From automated builds to managing people - we'll try to cover some hard lessons, in a fun way.

Automated testing is one of the cornerstone practices for teams practising XP, yet there are multiple levels of automated testing to understand. Combined with effective Continuous Integration, automated testing offers a rich level of feedback, assuming you structure your tests in the right way. During this workshop, participants will better understand the tradeoffs you need to consider when looking at different levels of automated tests. Having learned these tradeoffs, participants will attempt to structure tests in a way that maximises the speed and quality of feedback. We will then compare these with different examples from the real world to identify opportunities for improvement.

Participants will plan out their logical testing pipeline using lightweight modelling techniques (i.e. index cards and sticky notes!) to keep everything language and tool agnostic. We will then run a second iteration over the plan, layering a variety of tools and libraries on top to understand what it would look like in the real world.

Change, change, change - the 5 year evolution of an Agile team (Paul Field)

90 mins

Agile software development methods promise rapid delivery of valuable, high-quality software to the business. Do the aspirations of the Agile Manifesto stand up in the day-to-day realities of an investment bank? How can you get the benefit of Agile techniques for your projects?

In this presentation, Paul Field introduces the concepts of Agile software development and shows how a variety of Agile techniques have been applied over the last five years on his projects at Deutsche Bank. In those five years the demands on the team and the types of projects have changed dramatically. Learn how the process evolved - what worked, what didn't and why.

Nothing beats a large codebase that has been worked on and cared for by hundreds of developers over many years, that is still in good shape, and that can still be used to churn out one successful product after another.

We have studied such a codebase; we have analysed and commented on actual changes made by professional programmers over the years. In particular we have paid attention to the small refactoring tasks and code-cleaning activities that seem to be needed to keep the codebase in good shape - the small and "insignificant" changes that professionals make to avoid rot in the codebase.

There will be a lot of C and C++ code in this talk. We will focus on the small details of code cleaning. Be prepared for tough discussions about what really adds value or not to a codebase.

Compilers are fundamental to computer programming. Not only are fundamental programming techniques center stage in compiler design, but understanding how compilers are built will enable much more effective use of them. In this session you'll learn how compilers are built from front to back.

When building an application that has persistent storage requirements, many of us reach for a relational database. However, that's not what everyone has been doing: there are now a handful of alternative database architectures that have been loosely gathered together under the NoSQL banner. They are all intended to handle large amounts of data in a scalable way, both in terms of the amount of hardware needed, and in terms of reducing the difficulty of adding new types of data to the existing data corpus.

CouchDB is one of these: an open-source document-oriented database written in Erlang, it offers replication, scalability, fault-tolerance, and queries written as map/reduce functions in more-or-less the language of your choice (although JavaScript is the main one). In this session, we'll run through the architecture of CouchDB, work through some simple examples of what can be done with it, talk about performance and, finally and most importantly, look at whether it's more fun to work with than SQL...

Coupling, the number of things a unit of code depends on, is an important consideration when designing and maintaining software. Without due care, a slow increase in coupling between units over a few releases of a software product, or even over a number of iterations within a release, can lead to software that is difficult to change, or worse, results in a ripple effect throughout seemingly unrelated units. We cannot write software without some coupling - how else would the code achieve anything? Thus coupling is not a bad thing in itself; it is the degree to which a unit is coupled to other units that can be undesirable, and that degree is not an absolute measure. Understanding what coupling is, its many forms and how to recognise them, is useful if we want to avoid unnecessary coupling in the software we create. It is important to control coupling if software is to be improved over a number of releases, as high coupling slows development, increases time to market and inevitably leads to lost revenue.

This session attempts to explore coupling in all its manifestations, examine the difference between good and bad coupling and to consider its bedfellow cohesion. The session will also explore the techniques used before, during and after the act of design which can be used to reduce unnecessary coupling and as an aside looks at those which can lead to increased coupling. The intent of the session is to arm those designing and writing software with the understanding and techniques to create loosely coupled and maintainable software.

Agility asks for face-to-face communication, trust and collaboration. Proximity can be created by travelling - at least sometimes. Virtual communication channels provide another possibility for overcoming the distance. But we should take into account the trust threshold which, once hit, will break an existing relationship.

In this session I'll reveal how to avoid this threshold, how to know if you're approaching it, and what the options are once you reach it. Moreover, I'll cover the advantages and disadvantages of synchronous and asynchronous tools. And finally I'll clarify who is travelling, where to, and for how long.

In the database world there are many buzzwords that most software engineers only hear about but never get experience with. This talk aims to clarify what lies behind some of these buzzwords and describe key differences from the more common transactional database. The talk will also provide enough insight for engineers to decide if any of these technologies are useful in their current or future projects.

The presenter has worked for many years on the periphery of databases, wondering about these buzzwords. He eventually got involved with data warehouses and now wants to share his experiences with fellow engineers.

Testing databases is not as easy as unit testing of classes and functions. Databases are full of state and internal logic which must be set up before testing can start. There are also lots of dependencies that are difficult to isolate or stub out.

This presentation will look at some techniques to create automated tests for databases and how to debug SQL code including single stepping stored procedures. We will automate database testing with the use of popular unit testing frameworks. We will test simple CRUD statements, calls to stored procedures and verify triggers. Test suites in Java and .Net will be demonstrated.

We will also see how a database can be developed with agile methods. Databases are traditionally developed up front as the database schema is difficult to change later.

Domain-specific languages offer an opportunity to raise the level of abstraction and to specify the solution directly using domain concepts. In many cases the final products are generated from these high-level specifications. In this session we jointly develop a modelling language for a specific domain (such as medical device or interactive). In collaborative group work we seek good design abstractions, capture them in a metamodel, and define the language including constraints and concrete syntax. At the end of this experiential session we try out the language by modelling some applications.

Tools like JUnit and CPPUnitLite have helped automated unit testing become a popular practice in many modern software development projects. However, when developing multicore programs it is much more difficult to introduce such testing, and related practices like Test-Driven Development, into a project. This is because adding parallelism to a program introduces new classes of errors which simply don't exist in the sort of single-core programs that these practices traditionally target. We start our presentation with a short explanation of what automated unit testing and Test-Driven Development are, and then try to answer the fundamental question: where is the best place in the development cycle to test parallel code? We also discuss different methods of detecting, tracking and testing for errors which are unique to multicore programs. Although much of the work described here is implemented in native C++ code, many of the findings will also be of interest to programmers of managed code. The presentation is primarily aimed at people who want to know how automated unit testing might be introduced into their multicore development environment.

This is a (much!) extended version of my lightning talk from the ACCU 2009 conference.

What is a character? What is Unicode? Why should my program care? How can I make my program handle it? In this session, we will look for the answers to these questions and more. We will look at a few of the hundreds of available encodings: the ways of representing character data for processing, transmission and storage. We will examine the pros and cons of some of the encodings and look at the process of converting between different encodings, including UTF-8, UCS-4 and ASCII.

We'll look at some of the issues that arise when applying various types of processing to character data. How do we know that a given character is upper case? If it is, then how do we convert it to lower case? How do we know that a character is a numeric digit? We'll take a look at case conversion and string comparison.

Once we know how to process the character data inside the program, how do we go about ensuring that we can display the character appropriately? We will look briefly at the terminology surrounding fonts, including typefaces and glyphs as well as covering the mechanisms by which a character is mapped to a glyph or to multiple glyphs and even how a sequence of characters may be mapped to a single glyph!

Recently, Java enterprise web application programming has been leaning away from the classical J2EE approach. Traditional Java Server Pages (JSP) programming, and even libraries such as Struts, are being replaced by new AJAX libraries that make GUI programming more straightforward, robust and easier to unit test.

In this session I will look at what an enterprise web application is.

I will demonstrate how to develop a more robust GUI with an AJAX library and how to create a more object orientated Data Access Layer (DAL) with an Object Relational Mapping (ORM) library.

After defining what an enterprise web application is I will move on to demonstrate how to create a DAL, with real code examples, and explain how to use a registry to abstract away Data Access Objects (DAOs) so that the real DAOs can be used in production and integration testing while seamlessly substituting mock objects for unit testing.

Then I will look at an AJAX library and demonstrate how to create the presentation layer in Java, again with real code examples, and make Remote Procedure Calls to access the DAL. I will then look at how to integrate the AJAX library into traditional Spring MVC in order to tap into the vast library of functionality that the Spring Framework can provide for web-based enterprise applications. I will explain how the tools provided by Spring make integration testing of DAO objects very simple.

Finally I will look at how to use Spring Security to authenticate users of the application and secure individual Remote Procedure (RPC) calls made from the client application, running in a browser, to the server.

This session should be really fun. We will take a deep dive into findbugs, a very useful tool which every programmer should use to check Java code for potential bugs, defects and antipatterns.

After a first overview of the problem domain (static analysis), we will take some of Joshua Bloch's "Effective Java" rules from his book (an ACCU mentored-developers mailing list on it is currently running...) and see how, and if, they can be implemented in a tool like findbugs. I will show how findbugs works internally and we will implement a new bug detector (a findbugs rule) to find specific defects we are interested in.

Additionally we will cover the different levels of static analysis: code, design and architecture. Interestingly, findbugs checks the code and design levels but fails itself (!) at the architecture level, which means the tool itself has suffered major architecture erosion from version 0.7 to the current 1.3.8.

I am now participating in the findbugs project and have contributed several architectural refactorings on the findbugs 2.0 branch (see the sourceforge/googlecode SVN repository), which is currently under development and should hopefully be finished around the time of the ACCU 2010 conference. We will see how far an automated, tool-supported approach can lead to better results, and how tools like findbugs can discover areas of code and design erosion and suggest improvements. Additionally we will look at the next generation of analysis: architecture analysis.

The title says it all! If it's hard to write unit tests, or they take too long to run, if your plugins need the whole application for their distribution, if you can't (re)use a bit of your colleague's code without importing the entire team's work, if your multi-threaded code performs better when you run it sequentially, or you've got 5 versions of the same 3rd party library littering your source repository, then you're suffering from this. Or perhaps just part of it...

Modern languages provide us with many tools for creating beautiful, modular, general, flexible and simple abstractions, yet it seems they give us even /more/ tools for writing ugly, monolithic, specific, rigid and complicated, er, /concretions/.

With examples of both kinds in C#, Python, C++ and maybe even C++0x (if it stays still long enough to get the syntax right :-)), this is a talk about our (code's) propensity for wearing too many hats.

The standard C++ library introduced generic algorithms on homogeneous sequences. C++0x introduces the tuple class template for inhomogeneous sequences but provides neither generic algorithms nor use cases for this powerful class template. This presentation explores some of the power yielded by tuples and shows tools that ease their use, like algorithms on tuples and enable_if. In addition it goes over some of the extensions voted into C++0x which relate to generic programming, like variadic templates and the enhanced functional support, notably bind().

The goal of this presentation is to demonstrate how to create a library component operating on a variety of structures: the details are exposed to the library using tuples, and the library automatically adapts its behavior to the exposed structure at compile time. This is demonstrated with some simple examples employing the various techniques. The implementation of such a library makes heavy use of templates, with the goal of exposing a simple and type-safe interface to the user.

Google released the Googletest library for C++ in mid 2008 and it joined a long list of Unit Test frameworks. The developers had concentrated on making the framework powerful and highly portable and released it under a very relaxed open source license. The library is well thought out, and easy to pick up by anyone familiar with other popular unit testing frameworks. In early 2009 they followed this by releasing the Googlemock mocking library, which is the icing on the cake. C++ developers now have access to a library that provides many of the mocking features that users of languages that provide reflection have had for years.

This session will give you an in depth tour of these libraries, pointing out their strengths and shortcomings. Familiarity with C++ is a precondition, but no knowledge of Unit Testing frameworks or TDD is necessary.

The history of recent computing has seen many attempts to overcome the "free lunch is over" problem, i.e. how existing software can benefit from upcoming hardware performance improvements. Graphics card designers in particular have pushed this to one extreme - the programmable Graphics Processing Unit, or GPU, has evolved into a highly parallel, multithreaded, manycore processor with tremendous computational horsepower and very high memory bandwidth, probably today's most powerful computational hardware for the dollar. Such consumer hardware turns every home computer into a supercomputer, with performance currently reaching 2.7 Tflops, i.e. 177 times the performance of Deep Blue. In addition, the field of GPGPU computing is maturing. Two standards are currently emerging: one of them, OpenCL, is already integrated into Apple's Snow Leopard OS; the second, DirectCompute, will follow in Windows 7.

But how do you program these processors? How does programming them differ from normal programming? What problems are suited to the GPU? Which are not? This talk will give an introduction to the field of GPU computing. After an overview of its special programming model, some typical techniques and solutions will be presented. It will be shown how these techniques lead to a program that can run on any number of processor cores and will scale with future hardware automatically. Furthermore, current trends in creating higher-level constructs (like the STL-mimicking Thrust library) and a short look at the two vendor-independent standardization solutions will be given. Last but not least, a demonstration will show the potential of these highly parallel manycore processors. The talk will be based on the currently most popular and most widely used free SDK, CUDA (an acronym for Compute Unified Device Architecture), but the techniques and solutions given apply to all available SDKs and upcoming standards.

This session contains a series of nine short hands-on exercises that show different ways of creating parallel code. This session is ideal for those who are interested in programming for multicore but just haven't had a chance to experiment. Each exercise should take less than 7 minutes to complete, and requires just a modest knowledge of C/C++. Examples will include both task parallelism and data parallelism. The workshop will use multicore laptops that have been preconfigured with the appropriate development tools. Although the development environment will be Linux based, the majority of the exercises can be readily ported to a Windows based system.

[From Stephen] Max 20 people (subject to room availability) - session could be repeated if popular.

Importance of Early Bug Detection for Improving Program Reliability and Reducing Development Costs (Sergey Ignatchenko) {Slides}

45 mins

The longer a bug is allowed to exist within the software development lifecycle, the more expensive it becomes, and the growth is exponential. The cost includes both the much longer time needed to detect rarely occurring bugs and the potential need to rewrite a much bigger portion of code to fix them.

The talk examines three special cases of the most difficult-to-find bugs: multithreaded bugs, security bugs, and third-party library bugs. Some of these can be found only by code review.

Conclusion: while no means of bug detection should be neglected, improving program reliability is far too often equated with testing alone, even though there are many ways to improve it and to reduce the number of bugs at much earlier stages of the software development lifecycle, at much lower cost both for developers and for end-users.

Introduction to Scrum: Shock Therapy -- How new teams in California and Sweden systematically achieve hyperproductivity in a few sprints (Jeff Sutherland)

90 mins

New teams need to learn how to do Scrum well starting the first day. This talk will describe how expert coaches at MySpace in California and Jayway in Sweden bootstrap new teams into a hyperproductive state in a few short sprints. This requires new teams to do eight things well in a systematic way. Good ScrumMasters will make sure their teams understand these basics for high performance, and great ScrumMasters will make sure the teams execute all of them well. This session will review the critical success factors for new Scrum team formation. Click here for the Shock Therapy IEEE paper: http://jeffsutherland.com/scrum/SutherlandShockTherapyAgile2009.pdf.

What happens when a project has a hard deadline looming and requirements set in concrete? Testing gets cut, of course. This case study examines the unorthodox approach this Project Manager took to minimise risk while cutting test time to a minimum. Will her project go live on time? Will her software work? Will the client be satisfied? Will the Project Manager get paid? What lessons can we learn about managing expectations?

Agile has long shunned up-front design. When Agilists force themselves to do up-front work, it is usually limited to a symbolic use of User Stories for requirements and metaphor for architecture, with much of the rest left to refactoring. Experience and formal studies have shown that incremental approaches to architecture can lead to poor structure in the long term. This tutorial shows how to use domain analysis in a Lean way to build an architecture of form that avoids the mass of structure usually accompanying big up-front design, using only judicious documentation. It will also show how architecture can accommodate incremental addition of features using Trygve Reenskaug's new DCI (Data, Context and Interaction) approach, and how it maps elegantly onto C++ implementations. The tutorial is based on the forthcoming Wiley book of the same title.

Each session of Lightning Talks is a 45-minute sequence of five-minute talks given by different speakers on a variety of topics. There are no restrictions on the subject matter of the talks; the only limit is the maximum length. The talks are brief, interesting, and fun, and there's always another one coming along in a few minutes.

We will be putting the actual programme of talks together at the conference itself. If you are attending but not already speaking, and you still have something to say, this is the ideal opportunity to take part. Maybe you have some experience to share, an idea to pitch, or even want to ask the audience a question. Whatever it is, if it fits into five minutes, it could be one of our talks. Details on how to sign up will be announced at the event.

In this talk, I will describe the principles behind Erlang-style Concurrency - what problems it was designed to solve, and how it fundamentally changes the way you go about structuring your programs. I will illustrate how to achieve great scalability on multicore and in compute clouds, without sacrificing clarity or your own sanity.

Objects do not live in a free society: they exist for a purpose; they are not created with equal rights; they should not aspire to equality. What this means in practice is that objects live in a class-ridden society where each class serves a different role in the program as a whole. One category of object in need of attention and liberation is the one representing domain values. Values are fine-grained and informational. They are inherent in the problem domain, but are often flattened into little more than plain integers and strings in the implementation, weakening the correspondence of the code to the situation it addresses.

The Value Object is often cited as the pattern to employ, but when it comes to identifying and implementing the Value Object pattern, there is a great deal more to be taken into consideration than can reasonably be fitted into a single pattern. This talk looks at the practices and concepts that surround values and their implementation in object form across different languages.

Intended Audience: Programmers

The Multicore Revolution continues apace, but is this just a hardware activity at present? Is software being left behind (again)? Fortran, C and C++ (with OpenMP and MPI) are seen as the standard languages of parallel programming because of the history of parallelism in high performance computing (HPC).

However even these are changing, cf. Threading Building Blocks (TBB) and other initiatives. Are imperative languages just evolving into declarative ones?

Erlang has long been a language for parallel programming. OCaml is being used quite a lot. Haskell is seeing increasing use. Is functional programming finally going to be important?

In this session we will take a look at how and why these functional languages are beginning to take the world by storm. What is it about functional languages that makes it imperative to use them in a parallel programming context?

Intended Audience: Programmers

Java has had multithreading since its inception: it was developed towards the end of a strong period of research into parallelism (1986--1994), and chose to incorporate threads integrally into the language.

Initially the threads were just concurrency tools for programs running on a single processor. As systems became multiprocessor ones, and indeed multicore ones, and threads were used as a tool by operating systems to harness multiple processors and cores, Java fell into being a parallel processing language without really trying -- the JVM naturally harnesses all processors and cores available using native kernel threads.

Over time though, it has become patently apparent (but there is no patent on it) that shared memory multithreading is the problem and not the solution to the parallelism that is now the norm due to the Multicore Revolution. JSR166 came and was agreed, leading to java.util.concurrent. JSR166x and JSR166y augmented this with lots of new goodies for Java programmers: this introduced Futures and Parallel Arrays which are great tools for harnessing parallelism without all the hassles of synchronization issues with threads.

There are however other things happening that could be really important. Enter: Scala, Groovy and Clojure.

Instead of threads, programmers use actors, dataflows, CSP, software transactional memory, and agents. Scala introduced the Actor Model as its main vehicle for parallelism. GParallelizer not only brings the power of Groovy as a coordination language to the JVM, it also raises the level at which application programmers work with java.util.concurrent: the Actor Model, Dataflow, and soon CSP are supported. Clojure is showcasing software transactional memory as well as agents and other techniques based on JSR166.

All these ideas have been around for years, but now that parallel programming is the norm and not just the province of high performance computing (HPC), they are becoming Very Important. Using these message-based models of computation, synchronization problems are more or less non-existent. Deadlock and livelock are not impossible, but they are nearly so.

This session will be a tour through all these, showing why the JVM is the platform of choice for all discerning applications programmers.

Traditional testing, as a means towards quality assurance, is far too costly and far too ineffective. There are much smarter ways to approach software quality. This session will argue, with facts, for half a dozen ways of getting reliability and other qualities, such as usability, security, and adaptability, into your systems more cost-effectively than conventional testing.

The world is full of functionally rich, slow-to-build, hard-to-maintain C++ systems. Some of these have been developed over time by many and varied hands. They continue to exist because they provide valuable functionality to the organisations that own them. To maximise their value it is necessary to provide interfaces to today's popular application development languages, and to make it possible to continue developing them in a responsive and effective manner.

This is the story of one such system, the problems it presented and the approach taken to addressing these problems.

While I'll have slides to guide discussion and tell this particular story, I will also encourage the audience to share their experiences during the session.

Sometimes you see code that is perfectly OK according to the definition of the language but which is flawed because it breaks too many established idioms and conventions of the language. This will be an interactive session with discussions about good versus bad C++ code. We will discuss simple C++ idioms and coding conventions, but we will also touch upon best practices when working with C++ in large codebases with many developers of mixed skill levels.

This talk will look at the most common architecture patterns in Erlang-based products, describing how each of them solves a particular problem while guaranteeing no single points of failure. It will start with the early versions of the AXD301 switch and end with examples of the use of Erlang in cloud computing architectures.

Working with very large data sets, only a few years ago the monopoly of a few companies (such as Google, Walmart, Yahoo, or Morgan Stanley), is becoming increasingly commonplace. Dealing with massive quantities of data on parallel computational networks shifts the usual design tradeoffs substantially: operations that are traditionally considered cheap become prohibitive, and algorithms that seem ungainly become life savers. Andrei shares his experience of working with large data sets, drawn from his doctoral work and six months of Natural Language Processing research at Facebook.

We present our experience applying "system-test first" test-driven development (TDD) in the development of large systems and systems-of-systems. We try to address integration and system testing as early as possible. The sooner the system is in a deployable state, the easier it is to react to changing business needs because we can deliver new features to users as soon as is deemed necessary. We therefore start by writing tests that build and deploy the system and interact with it from the outside and, to make those tests pass, add code to the system in the classic "unit-test first" TDD style.

Many teams applying TDD start writing unit-tests and leave integration and system testing until late in the project because they find it difficult to write tests that cope with the distributed architecture and concurrent behaviour of their system. We will describe how we address common pitfalls, including unreliable tests, tests that give false positives, slow-running tests and test code that becomes difficult to maintain as the system grows.

We will also describe how writing system tests guides our architectural decisions. Writing unit tests first guides the design of the code to be flexible and maintainable. In a similar way, we have found that writing system tests first guides the architecture to be easier to monitor, manage and support.

This presentation is a strongly C++ based look at the challenges and techniques for tackling problems with memory usage, including address space shortage and fragmentation.

When an application works with the small, easy-to-run test data set that the development team likes to use but behaves very badly in production, it presents a testing dilemma. How can we verify that our application will cope with large volumes of data without incurring the expense of very long-running production tests?

We need to assess whether the production data is a reasonable size and shape for our application to handle.

If our application should be able to cope but isn't, what is going wrong? Is it crashing? Is it stuck in an infinite loop? Does it just look stuck while actually being painfully slow?

First we explore some techniques for assessing memory usage patterns, both using features available in C++ and also using basic tools available on popular operating systems.

Then we investigate some of the ways in which different memory allocation and usage patterns can have profound effects on the overall performance of an application.

Finally we look at some tricks for simulating bad memory situations cheaply and easily so that we have a good chance of quickly detecting when parts of our application have bloated and will fail to provide the performance that will be required for them to cope in a constrained memory environment.

Although Test-Driven Development (TDD) has become the standard practice for developing software, there are not many tools available to write unit or integration tests for XSL Transformations, and none for the development of XML Schemas. TestXNG is an effort to fill the gap; in contrast to the few tools that are available for writing unit tests for XSL Transformations, it is not based on JUnit but inspired by TestNG.

TestXNG tests are specified in XML documents, and each such document can contain one or more assertions. For XML Schemas, TestXNG can be used to assert the validation or non-validation of some XML source against a given schema. For XSL Transformations, it can assert the correctness of a transformation; later versions will also be able to include parameters and modes, test direct calls to templates, and check the presence of keys. One effect of using TestXNG, completely in accordance with other TDD experiences, is that the resulting code is much more modularized, and thus also more comprehensible, as a consequence of making it testable.

TestXNG is implemented using Java and Maven, is available as an Open Source project at https://sourceforge.net/projects/testxng/, and is a work in progress.

The Antikythera Mechanism is an astronomical calculator from the first century B.C. Its currently agreed-upon model consists of 35 gears. Its back face contains four dials tracing a luni-solar calendar and an eclipse prediction table. A number of interlocked gears calculate the ratios required for moving the four dials. The front face shows the positions of the sun and the moon in the zodiac. The elliptical anomaly of the moon is calculated by advancing one gear eccentrically through another and mounting that assembly on a gear rotating according to the moon's long-axis precession period. The mechanism's design eerily foreshadows a number of modern computing concepts from the fields of digital design, programming, and software engineering.

The talk will briefly go over the mechanism's provenance and the modern history of its study, focusing on recent findings that an international cross-disciplinary team of scientists obtained through surface imaging and high-resolution X-ray tomography. The talk will offer a detailed explanation of the mechanism's operation by presenting a Squeak EToys-based emulator that is built and operates entirely on mechanical principles.

My book about the C++ Standard Library is 10 years old now. But a new version of the Standard is coming, C++0x, so I am preparing a new edition of the book. Unfortunately, I have no clue yet what was added, because in recent years I was dealing with SOA rather than with C++. I only know that, while concepts were finally removed, a lot of other stuff was added. So, what I can announce for this talk is simple: I will present everything I learn during the next months that, from my point of view, is a noteworthy new feature of the C++0x Standard Library. You might visit this talk to learn about these new features, hear about some strange or funny pitfalls, or just to correct me because you are more of an expert than I am. It's up to you ;-)

A test case is a container. Counting the containers in a supermarket would tell you little about the value of the food they contain, but the situation is much worse for testing, since test cases are much more variable and less comparable. Pass/Fail rates mean nothing, and test case artifacts can't contain or duplicate the value of a skilled tester. I stopped managing testing using test cases in 1992, and switched to test activities, test sessions, risk areas and coverage areas. This talk is about how you can do that, too, and why you should.

In the Agile era, unit testing has been over-emphasised to the detriment of other forms of testing. As Agile has matured and the software industry starts to move beyond pure Agile, we are rediscovering the importance of other software quality practices. Quality is not free, and there is a cost, both economic and social, to implementing fully automated testing systems. I'll be looking at some case studies of teams that have walked the road towards Agile testing, evaluating the decisions and compromises they made in order to release their products and how this has affected business value and team culture. I shall also demonstrate how metrics were used to measure the business impact of both positive and negative actions.

All of a sudden we seem to be inundated by hordes of new languages, each competing for the "next big thing" award. There's Python, Ruby, Scala, Clojure, F#, and then of course all the new standards for C++, Java, and C#. What should we think about all this? Which language will win? Will _any_ language win? And how does all this tie into the craftsmanship/professionalism movement?

I start with a brief description of my current work leading the Software Assurance Metrics And Tool Evaluation (SAMATE) project. The body of the tutorial covers how automated static analysis can help in the software development process.

Quality must be designed and built into software. Nevertheless, testing (dynamic analysis) and static analyzers both have roles in delivering excellent software. We begin by describing what static analysis is (and isn't) and comparing it in general with testing. We describe the dimensions of static analysis (universality, rigor, and subject matter) along with related concepts and their results. We review the state of the art in static analyzers, citing recent results from the 2008 and 2009 Static Analysis Tool Expositions and other work in the United States. Finally, we present suggestions on how static analysis might best be incorporated into software development.

Not sure about something? And that something affects the detailed design, an architectural decision or choice of functionality? Does that feel like a problem or a part of the solution?

There is a strong tendency for humans to feel unsure about uncertainty, in two minds over ambiguity and a little wobbly with instability. Whether over technology choice, implementation options, requirements or schedule, uncertainty is normally seen as something you must either suppress or avoid. Of this many people appear, well, certain. That you should embrace it and use it to influence schedule, identify risk and inform design is not immediately obvious. A lack of certainty offers the opportunity to highlight risk and reframe questions, making uncertainty part of the solution rather than necessarily a problem.

The Unix tools were designed, written, actively used, and refined by the team that defined the modern computing landscape. Although by the standards of shiny IDEs some may find the interface of these tools arcane, their power remains unmatched. Their versatility makes them the first choice for obtaining a quick answer and the last resort for tackling in minutes large, specialized, and difficult problems. Compared to scripting languages, another great productivity booster, Unix tools uniquely allow an interactive, exploratory programming style, which is ideal for efficiently solving many of the problems that we developers face every day. Natively available on all flavors of Unix-like systems, including GNU/Linux and Mac OS X, the tools are nowadays also easy to install under Windows.

We start with a brief introduction to the key ideas behind the Unix tools, their main advantages in today's computing landscape, and the factors affecting the choice between a pipeline of individual tools, a scripting language, and a compiled language. Many of the one-liners we'll build around the tools follow a pattern that goes roughly like this: fetching, selection, processing, and summarization. We review the most important tools for each phase, the plumbing that joins the parts into a whole, and more specialized commands for handling software development tasks and their visualization. The examples we'll see involve everyday problems we face as developers. We end with a discussion of common patterns and anti-patterns.

Much attention has been paid lately to developer-level testing, whether as unit tests or TDD. In comparison, little attention has been paid to system testing and to how what developers and designers do affects the ease (or otherwise) of system testing. This talk will use a number of examples and case studies to illustrate how developers can help get their products shipped by considering the handling of state, concurrency, time, scaling, logging, error handling and other system-level issues early on in the process.

On traditional projects, software developers are often presented with volumes of requirements that seem to be definitive and yet are full of holes. On agile projects, the whole team gets involved in establishing user stories on index cards. Come along to this session to understand the life-cycle of a user story from inception to implementation. You'll get a chance to practice working through some real user stories. We'll also investigate ways that agile teams handle requirements that don't seem to be user stories.

The multicore revolution is upon us; the free lunch is over. CPU manufacturers are adding more cores rather than increasing the clock speed, and modern CPUs often have lower clock speeds than a year ago. In order to get maximum performance from our software we need to take advantage of these cores --- we need to add concurrency to our applications.

In this presentation I'm going to talk about ways of doing this. Using the concurrency facilities from C++0x as a basis, I'll cover ways of wringing parallelism and concurrency out of existing programs, and ways of thinking about problems to make this process easier with new programs. I'll also cover some common pitfalls and give practical advice for testing concurrent programs.

Large software components - content management systems, workflow engines, ERP systems, CRM tools - promise much, but often seem to deliver less and with great difficulty. There sometimes seems to be some distance between what the product claims to do, and what it actually does. Sometimes it almost does what you need, but not quite.

It may present a myriad of possibilities, but give no guidance as to the best path. Alternatively, it may promote one true way and punish those who need to take the highways and byways. However, all these types of systems have one thing in common - you have no choice but to use what's in front of you.

Recently, Practical Law has replaced a large web application developed in-house over many years with an equivalent system developed atop FatWire Content Server, a 3rd-party content management system of precisely the category described above. The project went live on 6 September, having slipped 5 weeks on a 9-month plan.

This talk is not about how wonderful FatWire is, or how wonderful we at Practical Law are. Nor can it examine the reasons why the previous application was made obsolete, or how FatWire became the substrate for the new development. Instead, we want to look at the obstacles presented by our content management system, and those of our own devising, and how we overcame them. From that we'll try to draw lessons that implementors on, and (this may be wishful thinking) vendors of, large software systems might do well to learn.

In our component-based development methodology, virtually all of the software we design is rendered as components. When we say component in C++, we are referring to a .h/.cpp pair (of files) satisfying certain well-established, objective physical properties. Moreover, each of our developers is responsible for ensuring the correctness of the software he or she creates. Hence, along with each component developed, we require a standalone test driver to verify that all essential behavior implemented within that component behaves as advertised, i.e., according to the contract delineated in its component-, class-, and function-level documentation. In this very practical talk, we will review the various categories of common classes (e.g., utilities, mechanisms, and value-semantic types). We will also review the basic principles of testing, various methods for systematically selecting test data, and the four primary implementation techniques for writing test cases. We will then discuss the organization of our self-contained (and delightfully light-weight) component-level test drivers. The substantial remainder of the talk will address (1) the details of how effective individual test cases are conceived, documented, and implemented, (2) how these test cases can be profitably ordered to leverage already-proven component functionality, and (3) how similarities within the various class categories naturally lead to effective reusable testing patterns.

The dragon of numerical error is not often roused from his slumber, but if incautiously approached he will occasionally inflict catastrophic damage upon the unwary programmer's calculations.

So much so that some programmers, having chanced upon him in the forests of IEEE 754 floating point arithmetic, advise their fellows against travelling in that fair land.

In this session we shall briefly explore the world of numerical computing, contrasting floating point arithmetic with some of the techniques that have been proposed as safer replacements for it. We shall learn that the dragon's territory is far reaching indeed and that in general we must tread carefully if we fear his devastating attention.