Lars Knoll

Matt Godbolt

About the event

CppCon is the annual, week-long face-to-face gathering for the entire C++ community. The conference is organized by the C++ community for the community. You will enjoy inspirational talks and a friendly atmosphere designed to help attendees learn from each other, meet interesting people, and generally have a stimulating experience. Taking place this year in beautiful Bellevue, in the Seattle area, and including multiple diverse tracks, the conference will appeal to anyone from C++ novices to experts.

We – attendees at CppCon – are all teachers. Some teach for a living; many occasionally teach a course or give a lecture; essentially all give advice about how to learn C++ or how to use C++. The communities we address are incredibly diverse. What do we teach, and why? Who do we teach, and how? What is “modern C++”? How do we avoid pushing our own mistakes onto innocent learners? Teaching C++ implies a view of what C++ is; there is no value-neutral teaching. What teaching tools and support do we need? Consider libraries, compiler support, and tools for learners. This talk asks a lot of questions and offers a few answers. Its aim is to start a discussion, so the Q&A will be relatively long.

Engineering is programming integrated over time. That is to say, as much as it can be difficult to get your code to build and run correctly, it is manifestly harder to keep it working in the face of changing assumptions and requirements. This is true no matter the scale, from a small program to a shared library. Only two solutions have been shown to be theoretically sound: never change or provide no compatibility guarantees. What if there were a third option? What if we took the question of maintenance out of the realm of theory and moved it to practice? This talk discusses the approach we've used at Google and how that intersects with other languages, package management, API and ABI compatibility, and a host of other software engineering practices. The particulars of C++ as a language and an ecosystem make it well positioned for a different approach: Live at Head.

Two years ago, I started to focus on exploring ways that we might evolve the C++ language itself to make C++ programming both more powerful and simpler. The only way to accomplish both of those goals at the same time is by adding abstractions that let programmers directly express their intent—to elevate comments and documentation to testable code, and elevate coding patterns and idioms into compiler-checkable declarations. The work came up with several potential candidate features where judiciously adding some power to the language could simplify code dramatically, while staying true to C++'s core values of efficient abstraction, closeness to hardware, and the zero-overhead principle. The first two potential candidate features from that work to be further developed and proposed for ISO C++ are the <=> unified comparison operator (minor) and what I've provisionally called "metaclasses" as a way to generatively write C++ types (major). This talk is about the latter, and includes design motivation, current progress, and some live online compiler demos using the prototype Clang-based compiler built by Andrew Sutton and hosted at godbolt.org.

Qt is one of the largest and most widely used C++ frameworks. It is fully cross-platform, covering all functionality required to develop advanced graphical applications. The talk will go through important parts of Qt's history from its roots to what it is today. We will have a look at the relation between Qt and C++, and at some of the design philosophies driving the evolution of Qt. I'll go through the current state of the framework, latest releases, and ongoing development focus, and give an outlook into the future.

In 2012, Matt and a colleague were arguing whether it was efficient to use the then-newfangled range-based for loop. During the discussion, a bash script was written to quickly compile C++ source and dump the assembly. Five years later, that script has grown into a website relied on by many to quickly see the code their compiler emits, to compare different compilers' code generation and behaviour, to quickly prototype and share code, and to investigate the effect of optimization flags. In this talk Matt will not only show you how easy (and fun!) it is to understand the assembly code generated by your compiler, but also how important it can be. He'll explain how he uses Compiler Explorer in his day job programming low-latency trading systems, and show some real-world examples. He'll demystify assembly code and give you the tools to understand and appreciate how hard your compiler works for you. He'll also talk a little about how Compiler Explorer works behind the scenes, how it is maintained and deployed, and share some stories about how it has changed over the years. By the end of this session you'll be itching to take your favourite code snippets and start exploring what your compiler does with them.

Compile-time constraints will likely soon become part of our routine C++ programming vocabulary. Why? Such constraints are induced by new core language features (requires-clauses and requires-expressions) that are on the horizon for C++. What are these all about? Almost every function imposes requirements on its users; violating those requirements typically leads to incorrect programs. Historically, such requirements had to be expressed in comments or other documentation, as there was little machinery to express them in code. Soon we will be able to express more requirements in code, thus allowing compilers to detect and address more violations. This talk aims to prepare both new and veteran C++ programmers with the necessary background, tutorial information, and advice to exploit this powerful new supplement to function declarations. A case study, illustrating an unexpected gotcha, will conclude the presentation.

When writing a C++ program, we tend to think of the strengths and weaknesses of our computer, just as we think of our algorithms, data structures, and probably of language features we want to use (or want to avoid), and we code accordingly. To some, it might be surprising to learn that C++ is actually specified in terms of an abstract machine, with its own characteristics. If this is indeed a surprise for you, then you might be interested in knowing more about this machine. It's been there for a long time, and it influences the way we program as well as the way the language was, and is, designed. The aim of this talk is to provide a practical overview of what the C++ abstract machine is, how it affects the way we program, and how it affects language design itself. It will probably be most interesting to intermediate audiences who would like a closer look at some of the abstract underpinnings of the language.

Visual Studio 2017 was released this year and brings all sorts of new functionality for C++ developers on any platform, not just Windows. In this talk, we'll cover many of the new features of the latest update of Visual Studio 2017 and give you a preview of new features coming in a major update later in 2017 that we've never shared before, since you came to visit our hometown! We'll cover the ongoing evolution of our compiler and libraries, giving you an update on our conformance work as well as performance and build throughput. We'll talk about the new enhancements to our Linux targeting. We'll talk about brand new unit testing capabilities for major test frameworks. We'll talk about improved support for CMake and our Open Folder experience for getting full Visual Studio IDE support for code that doesn't have a project or solution file. We'll also walk you through some cool new productivity and debugger features.

We often talk about how new language features can help developers write more accurate and concise code. There is another type of discussion to be had on how tools help work around language issues and support developers. How about a third perspective: how the language can help tools do better? As C++ tools vendors, we often share our experience with C++ language trickiness and peculiarities, including the preprocessor and non-trivial parsing. It’s time now to look at the upcoming language changes through the IDE’s glasses. In this talk I’ll identify the most important issues with the language from the IDE’s perspective and show how new language standards, as well as other initiatives like the C++ Core Guidelines, are helpful and beneficial to IDEs. I’ll cover a variety of features from if constexpr to Concepts and Modules, as well as std2 and some other initiatives that are no more than proposals at this point. Come and see the language from our perspective.

Amongst the loud fanfare of C++11 arrived this quiet little gem of <system_error>, with std::error_code and std::error_condition born from the heart of boost::asio. With Committee input they evolved for inclusion into the C++11 Standard, providing consistent and extensible ways for applications to expose platform-specific error-codes, platform-independent error-conditions, and rely upon an open-ended design that permits future extensibility for message reporting; and even internationalization of logs and user-facing events and conditions. More than half a decade later, we most unhappily find that the motivation and intended use model for std::error_code and std::error_condition are still not well understood; even in circles eagerly embracing features and idioms encouraged by the latest C++ Standard revisions. This may be somewhat expected, as all things “error” tend to permeate system-wide design-decisions and influence the processing metaphor; how algorithms compose conditional success-or-failure branching; and create consistency challenges across APIs (both internally, and for interoperation with third-party libraries). We discuss the features and design of <system_error> and its intended use; make recommendations regarding API design; and propose possible extension mechanisms for module-specific concerns, instance-specific messages (with embedded value-reporting), and internationalization.

If you’re looking for a fast and lightweight code editor, Visual Studio Code has you covered. Come get an overview of Visual Studio Code along with the C++ extension that enables editing, building, and debugging your C++ code across Windows, Mac, and Linux.

We examine how the increasing complexity of language features related to interfaces in modern C++ has somewhat surprisingly produced increasing simplicity in the interfaces themselves. One of the major reasons for this emergent simplicity is common use of “substitution failure is not an error” or SFINAE in interface design. Appropriate use of SFINAE allows the production of “do what I mean” or DWIM interfaces that allow experienced designers to embed their judgement in interfaces. Most of the presentation will consist in examination of practical examples of SFINAE in interface design and development of a simple toolkit that automates construction of compile time template predicates. Abstract syntax trees are evaluated at compile time to enforce complex constraints on types in the SFINAE context.

Coroutines are coming. They're coming for your asynchronous operations. They're coming for your lazy generators. This much we know. But once they're here, will they be satisfied with these offerings? They will not. They will require feeding, lest they devour our very souls. We present some fun ways to keep their incessant hunger at bay. I, for one, welcome our new coroutine overlords. The Coroutines Technical Specification is an experimental extension to the C++ language that allows functions to be suspended and resumed, with the primary aim of simplifying code that invokes asynchronous operations. We present a short introduction to Coroutines followed by some possibly non-obvious ways they can help to simplify your code. Have you ever wanted to elegantly compose operations that might fail? Coroutines can help. Have you ever wished for a zero-overhead type-erased function wrapper? Coroutines can help. We show you how and more.

constexpr: in C++11, a curiosity; in C++14, viable for more uses; now with added power, in C++17 will it become an important tool in the programmer's toolkit? In this talk we will examine the possibilities and power of constexpr and explore what can (and what should) be done at compile-time with C++17. We'll present techniques for building constexpr data structures and algorithms, and look at what the standard provides and where it can improve. We'll also explore constexpr use of user defined literals for expressive compile-time abstractions. Compile-time computation offers perhaps the ultimate zero-cost abstraction, and this talk attempts to gauge the power available with C++17 constexpr.

We’ve all heard horror stories about bugs that were near-impossible to root-cause, and many of us have at least a few stories of our own. Corrupted or uninitialized memory. Resource leaks. API misuse and race conditions. Occasional and inconsistent crashes where all you have to go on are a series of unhelpful crash dumps. These kinds of problems are often time-consuming and tedious to debug, and can be both draining and infuriating. Time Travel Debugging (TTD) is a reverse debugging toolkit for Windows that makes debugging these kinds of problems far easier, in both small programs and commercial-scale software like Windows and Office. It's been an invaluable debugging tool for software developers and escalation engineers within Microsoft for many years. We’ve spent the last couple of years improving performance, scalability, and usability, and are excited to finally be able to release a public preview of Time Travel Debugging. In this interactive and hands-on session, we'll show you how to download and make use of our first public preview of Time Travel Debugging, demonstrate how to use TTD, and walk through the root cause analysis of some typically difficult-to-solve bugs like memory corruption, API misuse, and race conditions.

A lot of people hate build systems. What if using a library was just as easy as header-only libraries? EA has had a Secret Weapon called “packages” for over 14 years. EA's Packages are like Ruby’s Gems or Perl’s CPAN or Rust’s cargo. If you build a package from the package server it will download all of its dependencies. This talk will be about what we have learned about packages and versioning while building our large AAA games over the last 10+ years. Finally, what do we see for the future, like how will C++ modules fit in? In detail I will talk about:

Rian Quinn's "Making C++ and the STL Work in the Linux/Windows Kernels" from CppCon 2016 showed the difficulty of making C++ code work correctly in kernel mode. For some real-time systems, though, developing C++ applications that run in kernel mode "just works" as most of the necessary runtime support for Modern C++ is already available. Platform limitations, though, can offset the development gains that come with easy access to hardware. This talk will present a variety of issues — such as limited filesystem functionality, missing memory protection, limited debugging and performance monitoring tools, and constrained resources — that impact usage of standard C++ functionality and require additional due diligence on the part of the developer. Topics will include testing in user mode; kernel-mode exceptions; and programming the Intel performance monitoring hardware.

C++ videos could be viewed, in multiple languages, by an audience a few dozen times broader than today's. If you are a speaker and want your videos to be watched by the whole world, not just by English speakers, you can facilitate translation by providing original captions and translating them into other languages you speak. If you regularly analyse C++ videos in detail, translate fragments of them for yourself, and would like others to benefit from your translations, why not share them with the community? If you want your name to be recognizable, you can include it in the translations you submit. If you are interested in C++ videos with captions and translations, there is a resource that gathers in one place the links to C++ videos in the language of your interest - http://cppvap.wikidot.com/wiki:captio.... This talk tells how you can participate in this initiative and what you can get from it.

Automated trading involves submitting electronic orders rapidly when opportunities arise. But it’s harder than it seems: either your system is the fastest and you make the trade, or you get nothing. This is a considerable challenge for any C++ developer - the critical path is only a fraction of the total codebase, it is invoked infrequently and unpredictably, yet must execute quickly and without delay. Unfortunately we can’t rely on the help of compilers, operating systems and standard hardware, as they typically aim for maximum throughput and fairness across all processes. This talk describes how successful low latency trading systems can be developed in C++, demonstrating common coding techniques used to reduce execution times. While automated trading is used as the motivation for this talk, the topics discussed apply equally to other domains such as game development and soft real-time processing.

We will provide a brief overview including an explanation of what Unicode is, string terminology, and how Unicode supports non US languages. We will cover the pros and cons of various String formats and encodings including UTF-8, UTF-16, UCS-4, etc. A time line of Unicode development will be shown and how other languages have handled string processing over the last twenty years. We will provide a brief overview of where strings are used, what can go wrong with strings, why string encoding is important, and how the CsString library solves a major problem with string handling. We will explain how the CsString library has changed our CopperSpice Gui libraries and improved string processing in DoxyPress. No prior knowledge of Unicode, CopperSpice, or DoxyPress is required.

Most embedded devices are multicore, and we see concurrency becoming ubiquitous for machine learning, machine vision, and self-driving cars. Thus the age of concurrency is upon us, so whether you like it or not, concurrency is now just part of the job. It is therefore time to stop being concurrency cowards and start on the path towards producing high-quality high-performance highly scalable concurrent software artifacts. After all, there was a time when sequential programming was considered mind-crushingly hard: In fact, in the late 1970s, Paul attended a talk where none other than Edsger Dijkstra argued, and not without reason, that programmers could not be trusted to correctly code simple sequential loops. However, these long-past perilous programming pitfalls are now easily avoided with improved programming models, heuristics, and tools. We firmly believe that concurrent and parallel programming will make this same transition. This talk will help you do just that. Besides, after more than a decade since the end of the hardware "free lunch", why should parallel programming still be hard?

IncludeOS is a library operating system, where your C++ application pulls in exactly what it needs and turns it into a bootable binary. But once you have your standalone program with standard libraries, what do you really need from an operating system? In this talk we’ll show you some exciting developments in unikernel OS- and hypervisor design, ranging from a single-function do-it-all hardware interface for everything needed to run a web server, to a full on object-oriented ecosystem giving your C++ application total control over everything from devices, drivers and plugins, to every protocol in an internet enabled host. We’re running a full IP stack on platforms ranging from full blown server hardware to inside a single unit test in userspace and we still want more. We’ll discuss how minimal can be combined with maximal - giving you lots of modern abstractions while keeping the final binary as lean and mean as possible.

C++17 reserves the namespace std2 (and others) for future iterations of the standard library that may not be 100% compatible in design with the current namespace std. This session will suggest a much simpler allocator model that might be useful for that new library. What is an allocator model, and why should we care? There are a variety of experiments and benchmarks around now demonstrating the benefits that a well-chosen allocator can bring to performance-sensitive code. We would like to bring those benefits to any new standard library, but without the complexity that plagues the specification of allocators in the current standard library. An allocator model is a set of rules for writing and supplying allocators to types and objects, and the set of rules those types should follow when using a custom allocator. Following the principle that you should not pay for what you do not use, we will look into creating a model with minimal impact on code and complexity for users — in fact we will demonstrate (in theory) a model that will typically involve writing no code for users to support custom allocators in their types, and a runtime cost that can be entirely eliminated in programs that never choose a custom allocator! This presentation is a thought experiment in a possible future direction, and still a year or so away from becoming a proposal for standardization — in particular it will rely on creating a new language feature that we should demonstrate in a practical compiler. It offers a vision of a possible future for the language, and some of the problems that we would like to solve.

Dependency information, together with the smart management of binaries and binary compatibility of the Conan package manager, can be used to implement a modularized, fast and efficient Continuous Integration (CI) process for large C and C++ projects. This CI system knows what needs to be rebuilt, what can be built in parallel, and how to transparently manage build dependencies such as testing frameworks or toolchains (for example, cross-compilation to Android). This talk will present a CI system, implemented for Jenkins (but which could be implemented in other CI systems too), that, using the dependency graph provided by the package manager, is able to trigger dependent packages' build jobs, and only those transitively affected by the change, in the correct build order. Furthermore, the build jobs are arranged in concurrency levels, by the degree/ordering in the graph, but also for different configurations, so optimal build parallelism can be achieved. Also, such dependent packages can define custom rules to decide whether to build themselves or not, depending on configuration or versioning criteria. Everything will be fully demonstrated in practical examples. We will also present advanced CI techniques, such as how to create packages for tools, like testing frameworks, and later inject them as build requirements of other libraries. Moreover, the process can also automate the installation and transparent usage of complete toolchains, like cross-compiling C/C++ to Android with the Android NDK toolchain, to achieve a process that is convenient for developers and highly repeatable.

When MPC was asked to create a massive CG city for the film Alien: Covenant, they looked to leverage procedural generation as a means for iterating on the overall shape and structure of the city, in place of a prohibitively large team of environment artists. After evaluating all the practical third party options, it was ultimately decided that the best option was to build a custom tool to procedurally assist artists' city-building skills. This allowed for rapid iteration on the overall look of the city by striking a balance between manual and procedural techniques. The core algorithms were written in C++ for speed. The user interface was written in Python to accommodate quick feature changes, and a dash of Fabric Engine's KL helped with model import and rendering. This multi-language approach allowed the consistent application of the "best tool for the job" rule, which is a common pattern at MPC, allowing flexible teams with experts in a variety of skillsets. This talk will detail the history and development of MPC's city building tool, "Machi". Alan Bucior, Lead Developer of Machi, reviews the algorithms for city layout and building placement, discusses how to implement algorithms in an artist-driven manner, and shares various insights gleaned through the development process and discussion with stakeholders.

We already have array, vector, and unordered_map, what other data structures could we possibly need? As it turns out, there are a lot of them and they come from all areas of software! Curious to learn the latest method of representing a pathfinding search space in detailed 3D environments? Does efficiently detecting if a website could be malicious sound like an interesting problem to you? Perhaps understanding how AAA games store and track their entities so efficiently is more your speed? All these things and more can be yours in exchange for just one hour of your time! Using that hour we will delve into some of the unique challenges faced by C++ developers in a variety of domains, and learn the inner workings of the creative solutions devised to solve them.

Value semantics has been promoted in the C++ community for a long time, for reasons such as referential transparency, avoidance of memory management issues, and even efficiency in some cases. Move semantics in C++11 was a big step in language-level support for value semantics. In this talk, we’ll cover steps taken in C++17 for enhanced library support for value semantics. Specifically, we’ll focus on `std::optional`, `std::variant`, and `std::any`. We’ll discuss what they are, their motivating use cases, and most importantly, identify existing patterns that can be improved by replacing them with one of these utilities. We’ll also cover some of the details such as: `std::monostate`, `std::variant`’s `valueless_by_exception` state, the subtle difference in behavior between `std::optional<T>` and `std::variant<std::monostate, T>`, etc. The goal of the talk is to inform you of new library features in C++17, to convince you of their usefulness, and ultimately to add them to your toolbox.

C++11 introduced atomic operations. They allowed C++ programmers to express a lot of control over how memory is used in concurrent programs and made portable lock-free concurrency possible. They also allowed programmers to ask a lot of questions about how memory is used in concurrent programs and made a lot of subtle bugs possible. This talk analyzes C++ atomic features from two distinct points of view: what do they allow the programmer to express? what do they really do? The programmer always has two audiences: the people who will read the code, and the compilers and machines which will execute it. This distinction is, unfortunately, often missed. For lock-free programming, the difference between the two viewpoints is of particular importance: every time an explicit atomic operation is present, the programmer is saying to the reader of the program "pay attention, something very unusual is going on here." Do we have the tools in the language to precisely describe what is going on and in what way it is unusual? At the same time, the programmer is saying to the compiler and the hardware "this needs to be done exactly as I say, and with maximum efficiency since I went to all this trouble." This talk starts from the basics, inasmuch as this term can be applied to lock-free programming. We then explore how the C++ lock-free constructs are used to express programmer's intent clearly (and when they get in the way of clarity). Of course, there will be code to look at and to be confused by. At the same time, we never lose track of the fact that the atomics are one of the last resorts of efficiency, and the question of what happens in hardware and how fast does it happen is of paramount importance. Of course, the first rule of performance — "never guess about performance!" — applies, and any claim about speed must be supported by benchmarks. If you never used C++ atomics but want to learn, this is the talk for you. 
If you think you know C++ atomics but are unclear on a few details, come fill those gaps in your knowledge. If you really do know C++ atomics, come to feel good (or to be surprised, and then feel even better).

In this talk, we will describe the effort of migrating the API of a reasonably large open source library to C++11. During the migration we wanted to benefit from as many new C++ features as possible, while preserving the semantics and features of the library. We will present various trade-offs in choosing a smart pointer strategy that was compatible with the existing object ownership model. The signal/slot mechanism, formerly based on boost.signals, was simplified and replaced by an implementation relying on lambdas, std::function and std::bind. Many smaller helper classes such as Boost.Any, Boost.Date_Time, and others were replaced by their standard counterparts. The minimum requirement of Wt 4 is C++11, but we will describe how C++14/17 features are used if the compiler supports them. The main benefit of this transition is that the Wt API became more self-explanatory, compilation times have been reduced, run-time performance improved, and the library's users require less knowledge of Boost. We will also discuss secondary consequences of the transition, such as simpler stack traces and the impact on compiler errors. Wt is an open source widget-based web GUI library, first released in 2006. Before C++11 came around, Wt could be considered to be written in modern-style C++, relying as much as possible on the standard library and using Boost libraries for missing C++ features. Wt 4 is the next major release of the library, fully embracing C++11.

Want to make fast linked lists? Want to store sensitive data in memory? Want to place std::unordered_map in thread-local memory? Shared memory? How about GPU memory? You can do that in today’s C++ with allocators, the secret components of every STL container. Allocators went through a quiet revolution in C++11 and a major expansion in C++17. What did that give us? We'll look at the allocators available today in C++17, boost, TBB, and other popular libraries, and demonstrate some of the amazing things that can be achieved by taking the step beyond the stack and the heap. This talk is not about allocator implementation, but is a showcase of the things that can be done with off-the-shelf allocators available now and with C++17.

This tutorial is an introduction to x86 assembly language aimed at C++ programmers of all levels who are interested in what the compiler does with their source code. C++ is a programming language that cares about performance. As with any technology, a deep understanding of C++ is helped by knowledge of the layer below, and this means knowledge of assembly language. Knowing what the compiler does with your source code and the limitations under which it operates can inform how you design and write your C++. We learn how to generate, inspect and interpret the assembly language for your C++ functions and programs. We take a short tour of common assembly instructions and constructs, and discover why extreme caution should be exercised if we are trying to infer performance characteristics from a simple inspection of assembly code. Starting with a simple `operator+` for a user-defined class, we take a look at how interface and implementation choices affect the generated assembly code and observe the effect of copy elisions and related optimizations that compilers commonly perform.

We have often found it limiting that std::function cannot store callable objects that are not copyable, so we developed and open-sourced folly::Function, a function wrapper that can store move-only callable objects. This presentation outlines the design decisions behind folly::Function and illustrates their consequences and our experiences after 18 months of wide production use at Facebook. We find folly::Function is more appropriate than std::function for typical use cases, such as storing callback functions and submitting tasks for asynchronous execution. Other features of folly::Function include that it is noexcept-movable, and it avoids some known issues regarding const-correctness in std::function, which allows non-const operations to be invoked on a const reference. Instead, folly::Function lets you declare whether a callable may or may not mutate its state (e.g. folly::Function<void() const>).

Fuzzing is a family of testing techniques in which test inputs are generated semi-randomly. The memory unsafety of C++ has made fuzzing a popular tool among security researchers. Fuzzing also helps with stability, performance, and equivalence testing, and it’s a great addition to everyone’s CI. Our team has launched OSS-Fuzz, Google's continuous fuzzing service for open source software, and a similar service for our internal C++ developers. Over 1000 C++ APIs are being fuzzed automatically 24/7, and thousands of bugs have been found and fixed. Now we want to share this experience with the wider C++ community and make fuzzing a part of everyone’s toolbox, alongside unit tests. We will demonstrate how you can fuzz your C++ library with minimal effort, discuss fuzzing of highly structured inputs, and speculate on potential fuzzing-related improvements to C++.

What would you like to know about the C++ standard? Join us for a panel discussion with the leaders of the C++ standards committee where the audience asks the questions. This year we've got the chairs of the Core and Evolution working groups, joined by the primary authors of such major upcoming features as concepts, metaclasses, ranges, modules, coroutines, compile-time programming, and the spaceship operator.

With the success of GitHub, everybody and his brother is a library developer. Programmers love to create code, upload it to GitHub, and hope for immortality. Yet most projects get only the most cursory examination before being passed over by users. Why is that? GitHub considered the problem and just published its 2017 Open Source Survey. The popular social coding service surveyed over 5,500 members of its community, from over 3,800 projects on github.com, and also spoke to 500 coders working on projects outside the GitHub ecosystem. The survey asked a broad array of questions; one that caught my eye was about problems people encounter when working with, or contributing to, open source projects. An incredible 93 percent of people reported being frustrated with “incomplete or confusing documentation” (see https://thenextweb.com/dd/2017/06/02/...). Even the most experienced and dedicated software developers can't seem to write good documentation. This can be confirmed by looking over recent reviews of Boost libraries, where the most common complaint is that the documentation isn't usable. Programmers love their stuff and hope to get people to use it, so why don't they fix their documentation? The reason is simple: they don't know how. The problems: a) it's tedious and boring to write; b) developers don't know what to include and what to exclude; c) tools make things harder; d) regardless of the amount of effort invested, the end result is usually of little or no value. This presentation will present a "cookbook" and demonstration for creating documentation. Using this method will a) much diminish the tedium of the task, b) help improve the quality of library design and implementation, and c) create something that is useful to the library user. We will touch upon tools like Doxygen, etc., but this is only a small portion of the presentation. We use them, so they deserve mention; but they don't cause the problem, and they don't solve it either.

In recent years, the GPU graphics community has seen the introduction of many new GPU programming APIs like Khronos' Vulkan, Microsoft's Direct3D 12, and Apple's Metal. These APIs present much more control of GPU hardware, but bring with them a great increase in complexity. We need to rethink the way we do graphics programming to take advantage of new features, while also keeping complexity under control. This talk presents solutions to recurring programming problems with these new GPU graphics APIs. These solutions are intended to simplify the complexity of the API by an order of magnitude, while simultaneously improving overall performance. This talk aims to discuss some key techniques for other developers to create their own GPU rendering engine. Topics covered include using a ring buffer to stream data and descriptors from CPU to GPU, scheduling GPU memory and work from the CPU, designing a multi-pass real-time GPU renderer, and using fork/join parallelism to increase the performance of the CPU code that submits GPU work.

Assume we implement a very simple class having just multiple string members. Even ordinary application programmers prefer to make it simple and fast. You think you know how to do it? Well, beware! It can become a lot harder than you might initially assume. So, let’s look at a trivial class with multiple string members and use live coding to see the effects of different implementation approaches (constructors passing by value, by reference, by perfect forwarding, or doing more sophisticated tricks). Sooner rather than later we will fall into the deep darkness of universal/forwarding references, enable_if, type traits, and concepts.

If you build software for Windows, you use DLLs, and it’s likely that you may build DLLs of your own. DLLs are the primary mechanism for packaging and encapsulating code on the Windows platform. But have you ever stopped to think about how DLLs work? What goes into a DLL when you build it, what happens when you link your program with a DLL, or how do DLLs get located and loaded at runtime? Many of us build and use DLLs without fully understanding them. In this session, we’ll give an in-depth introduction to DLLs and how they work. We’ll begin by looking at what’s in a DLL—the kinds of things a DLL can contain and the basic data structures that are used—and the benefits and drawbacks of packaging code in a DLL. We’ll look at how DLLs are loaded, including the details of how the loader locates DLLs and maps them into the process; how dependencies are resolved among DLLs; and DLL lifetime and how DLLs get unloaded. We’ll also look at how DLLs get built, including what makes DLLs “special,” what goes into an import library, and how the linker uses import libraries. Finally, we’ll look at several other miscellaneous topics, including how DLLs interact with threads and thread-local storage, and mechanisms for solving or mitigating the dreaded “DLL hell.”

On the surface, function parameter default arguments seem like a very simple feature of the C++ language. This session explores how (not) true that is. If you like the dark corners of C++, you will come away with a new appreciation for this innocent looking syntactic sugar. Otherwise, you will have at least informed yourself on how not to blow your foot off with what looks like a slingshot.

Software development without test automation can no longer be considered professional. However, you might have existing code bases or want to rely on external libraries that may make writing effective and fast unit tests hard or even nearly impossible. A typical work-around for these situations is to introduce test stubs for such external dependencies to make your code testable. Some propose to use mocking frameworks, such as GoogleMock, together with unit testing frameworks to ease the specification of the replacement objects. These mocking frameworks often come with their own domain-specific language (DSL) to describe the behavior and expected usage of the mock object. In addition to a learning curve, the DSLs often do not help much when things do not work. The current lack of standardized reflection also requires macro trickery, making fixing problems even harder. A second issue is that existing code often must be prepared to suit the mocking framework's interception mechanism to allow injecting the mock objects. Last but not least, test-driven development (TDD) together with the use of a mocking framework can lead to the high coupling that TDD usually strives to reduce. This talk demonstrates "classical" mocking frameworks, shows the problems, and demonstrates how Cevelop's Mockator approach can help refactor existing code to get it under test, and how a very simple plain C++ solution can be used instead of a complicated mocking framework for unit tests with dependent code replaced by test stubs or mocks.

Djinni is a tool developed by Dropbox for cross-platform C++ development. This session will give an overview of mobile cross-platform C++ development, an explanation of what Djinni does and why it is useful, and details on several Djinni-based app architectures I have used.

This session will introduce you to the C++ object model: the rules by which C++ class objects are translated into memory layouts. We'll quickly cover polymorphic class types and multiple and virtual inheritance. We'll discuss the anatomy of a virtual method call, the difference between `static_cast` and `reinterpret_cast`, and what's contained in a vtable besides function pointers. We'll see that the way `dynamic_cast` thinks about the class hierarchy is slightly different from the way we're used to drawing it; and that `dynamic_cast` is expensive enough that sometimes we can find cheaper ways to ask an object for its type! The climax will be a complete, bug-free, and fast implementation of C++'s built-in `dynamic_cast`, using our own hand-crafted artisanal run-time type information (RTTI). Attendees will incidentally be exposed to several features of the modern C++ language, including type traits and the `final` qualifier. This session will mostly be talking about the Itanium C++ ABI, which is the standard on Linux and OS X systems. Mapping these concepts to the MSVC ABI will be left as an exercise for the reader of the project's GitHub repo: https://github.com/Quuxplusone/from-s...

Pattern matching brings a declarative approach to destructuring and inspecting complex data types. It’s a very powerful abstraction provided by many programming languages such as Haskell and OCaml, and more recently, Rust, Scala, and Swift. We’ll see a glimpse of pattern matching in C++17 and its current limitations through features such as structured bindings, `apply`, and `visit`. We’ll then jump into MPark.Patterns, an experimental pattern matching library for C++, with examples such as `fizzbuzz` written with the library.

We’ll see many examples that lead to simpler, declarative code focusing on the desired shape/state of the data, rather than a sequence of imperative code that inspects the data piecemeal. The goal of the library, and of the talk, is to gain experience and exposure to pattern matching in order to help guide the design of a language-based pattern matching mechanism.

CMake is the build system chosen by most open-source C++ projects. While it is fully capable of helping you enforce a good modular design, those features are usually not well known or understood. In this talk I will present modern CMake practices that will simplify your project build and help you design better C++ components with clear dependencies and build interfaces (the sum of compile flags required to use a given library). We will first do a quick recap of the theory behind modular design, most of it coming from John Lakos' work in Large-Scale C++ Software Design. Then we will see a few of the legacy CMake patterns that can be found in a lot of open source projects and explain their shortcomings. We will learn how to create a clean C++ library using modern CMake practices and depend on it in other modules. Finally, we will explore the options available to export the build interfaces for use by external projects. In this last part a few external tools will be discussed, such as pkg-config and Conan.
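
A sketch of the target-based style modern CMake advocates (target and path names are hypothetical): usage requirements are attached to targets, so a consumer states one dependency and inherits the whole build interface.

```cmake
# A modern-CMake library target: include paths and compile features travel
# with the target instead of living in global variables.
add_library(mylib src/mylib.cpp)
add_library(myorg::mylib ALIAS mylib)

target_include_directories(mylib
    PUBLIC  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
            $<INSTALL_INTERFACE:include>)
target_compile_features(mylib PUBLIC cxx_std_17)
target_link_libraries(mylib PRIVATE some_impl_detail)  # hidden from consumers

# A consumer declares one dependency and gets the full build interface:
add_executable(app main.cpp)
target_link_libraries(app PRIVATE myorg::mylib)
```

The PUBLIC/PRIVATE keywords are what encode the build interface: PUBLIC requirements propagate to consumers, PRIVATE ones stay internal.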

The main focus of this talk will be about the importance of lockless containers and RCU technology. The value of this approach will be explained and why it was added to libGuarded. I will also cover recent changes made to the RCU containers. I will explain the importance of libGuarded and how it was used in the CsSignal library to prevent deadlocks. Either basic familiarity with multithreading or attendance in Part I of this talk is suggested.

C++17 is adding parallel overloads of most of the Standard Library algorithms. There is a TS for Concurrency in C++ already published, and a TS for Coroutines in C++ and a second TS for Concurrency in C++ in the works. What does all this mean for programmers? How are they all related? How do coroutines help with parallelism? This session will attempt to answer these questions and more. We will look at the implementation of parallel algorithms, and how continuations, coroutines and work-stealing fit together. We will also look at how this meshes with the Grand Unified Executors Proposal, and how you will be able to take advantage of all this as an application developer.

C++ is a language full of curiosities, and entices the curious. This session will walk through half a dozen little code explorations of ideas that might have been solved in 5 minutes, but piqued my curiosity to keep digging and see just how completely or thoroughly they might be solved, and what we can learn about the language and the way it holds together along the way. Fundamentally, it is about the joy of exploring code long after the problem has been solved, to find those satisfying solutions to problems that don't need solving! There will not be much deep learning; instead, there will be numerous insights into corners of the language that are often (for good reason!) unexplored, that might help with the big picture when debugging some obscure bugs. In particular, constexpr and templates will be exercised, and some compiler limits may be tested. We will demonstrate code that runs the gamut of C++98 through to C++17, and even poke into experimental pending features such as concepts.

CNL is a numerics library born out of efforts to standardize fixed-point arithmetic. It provides number types which increase precision, enforce correctness and maintain efficiency. And by designing these types with composability in mind, the library aims to do for integers what the STL does for pointers. This introductory talk will show potential users how they can benefit from using CNL in a wide variety of applications. Firstly, the individual components will be illustrated using straightforward examples. Then we'll see how these components slot together to produce powerful new types. Finally I'll detail the steps necessary to adapt existing types to work within the CNL framework. Along the way, I hope to share some of the insights I've gained while learning about literal types including: why you shouldn't mess with `int` if you want zero-cost abstractions; how C++ is getting better at supporting new number types and my hopes for the forthcoming Numeric TS.

Abstracting a set of functionalities into a class which provides a higher level interface often requires tough design decisions. Users who do not have the exact requirements for which the abstraction is optimized will suffer a syntactic or run time overhead as a result. Alexandrescu's famous "policy-based design" provides a mechanism to allow the user to extend and customize an existing abstraction in order to fine-tune its functionality for many different use cases. This is however limited to use cases where each policy more or less represents a compile time strategy pattern. Alas, not everything is a strategy pattern. In this talk I will explore the viability of a more agent-pattern-like paradigm where each policy knows its requirements and publishes its capabilities. In this paradigm, glue code connecting any valid set of policies is automatically generated using template metaprogramming. This allows much more powerful customizations while maintaining static linkage.

Web services are flourishing, and C++ has some great libraries (such as Boost/Asio + Beast or CppRestSdk) which we can use as the basis to build such services. Yet it is still relatively inconvenient to define HTTP routes in C++. Most approaches available in online tutorials are based on manual manipulation of regex or HTTP concepts. In this talk, we will present the result of our work toward creating a clean HTTP routing library, usable on top of any HTTP transport layer library, which offers a terse and declarative syntax, composable routes, type-safety and a rich set of additional features such as generating sample routes or documentation. We will discuss our initial investigations, and explain why we chose a functional-programming-based approach over reflection-based designs such as are common in the object-oriented world. You will learn about some design choices which allowed us to come closer to the “Don't Repeat Yourself” ideal: maximizing the services offered for the information provided by the client of the API.

C++ is a wonderful and expressive language that gives programmers a lot of freedom even as it actively seeks to let programmers obtain maximal performance from their hardware. It so happens that sometimes operating systems make it easy to do things that are not at all natural for a C++ program, but that some C++ programmers consider essential to their practice. This talk will explore the problem of adding functionality to the language, more specifically to the standard threading library, where said functionality is not a natural fit for the C++ language specification. Expressed otherwise: how can we find ways to meet the needs of users without corrupting the language we all love? This talk will be more interesting to you if you have encountered situations where you wanted to do something in "pure C++" but found you had to resort to operating-system-specific features to meet your objectives. We will discuss the design space that has been explored for the problem under study, and will try to bring out the strengths and weaknesses of the various alternatives.

Are allocators worth the trouble? What situations merit their use? How are they applied effectively? What’s the performance impact? This practical talk by large scale C++ expert Dr. John Lakos demonstrates that having allocators in your tool box may lead to orders of magnitude speed improvements. The runtime implications of the physical location of allocated memory are often overlooked, even in the most performance-critical code. In this talk, we will examine how the performance of systems can degrade when using `new`/`delete` and `std::allocator`. We will contrast these global allocators, which allocate memory globally for a system, with local allocators that each allocate memory for a proper subset of objects in the system. We will also demonstrate how local allocators can reduce or entirely prevent the degradation seen in systems that rely on the global allocator. Six dimensions – fragmentability, allocation density, variation, locality, utilization, and contention – will be introduced to depict the potential for performance penalties and aid the listener in determining which local allocator will offer the best performance in their subsystems. Evidence will be presented that identifying these dimensions, and selecting a local allocator based upon them, can lead to *order-of-magnitude* reductions in run time compared to systems using a global allocator.

The term "Modern C++" can be traced back to Andrei Alexandrescu's "Modern C++ Design", published in February 2001. Much has changed since then. Alexandrescu is off dabbling in various things; Scott Meyers has retired. C++11 changed the landscape, then C++14, and now we are at C++17, with more on the way. Clearly, we are now in the Postmodern C++ era. So let's apply postmodernism to programming. YOU WON'T BELIEVE WHAT HAPPENS NEXT:

- How to concentrate on one section of a programme at a time, and test in isolation. QA HATES HIM.

- post-modern introspection? IT WILL SHOCK YOU.

- you'll NEVER BELIEVE what a postmodern smart ptr LOOKS LIKE!

Although this is a lighthearted talk, it also aims to be insightful. In fact, the goal is nothing less than to change the way you think about programming.

C++17 adds many new features: structured bindings, deduction guides, if-init expressions, fold expressions, if constexpr, and enhanced constexpr support in the standard library. Each of these features is interesting, but what will be their cumulative effect on real code? We'll explore how each feature may (or may not) help real code achieve enhanced readability, compile-time performance, and runtime performance. — Jason Turner: Developer, Trainer, Speaker. Host of C++Weekly https://www.youtube.com/c/JasonTurner..., Co-host of CppCast http://cppcast.com, Co-creator and maintainer of the embedded scripting language for C++, ChaiScript http://chaiscript.com, and author and curator of the forkable coding standards document http://cppbestpractices.com.

An introduction to the design and compatibility goals for Abseil - Google's new common C++ libraries project. I'll summarize some style points and policies that affect Abseil and its users, and demo hands-on many of the debugging features of the library.

C++ gives you enough rope to shoot your leg off. Readable (and thus easy to maintain, easy to support) and error-free code in C++ is often hard to achieve. And while modern C++ standards bring lots of fantastic opportunities and improvements to the language, sometimes they make the task of writing high quality code even harder. Or can’t we just cook them right? Can the tools help? In this talk I’ll highlight the main trickiness of C++, including readability problems, some real-world issues, and problems that grow out of C++’s context-dependent parsing. I’ll then show you how to eliminate them using tools from the C++ ecosystem. This will cover code styles and supporting tools, code-generation snippets, code analysis (including CLion’s inspections and data flow analysis, C++ Core Guidelines and clang-tidy checks), and refactorings. I will also pay some attention to unit testing frameworks and dependency managers as tools that are essential for high quality code development.

C++17 is often described as “just a better C++14”, suggesting that nothing is new and nothing is changing the way we program. This talk presents class template argument deduction as a counterexample, a hidden gem in the new standard. Saves typing? A replacement for the `make` functions? If that’s your frame of mind, then you should come to this talk. The true power of class template argument deduction is underestimated: it’s a new point of abstraction, but one that requires creativity, insight, and an understanding of the language details to master. This talk will start by introducing all aspects of the feature to build up sufficient background knowledge, followed by teaching how to write deduction guides by example, and finally explain how to build abstractions using the whole feature in a top-down approach, with patterns categorized.

Designing a fast IP stack from scratch is hard. Using delegates made it all easier for IncludeOS, the open source library operating system written from scratch in modern C++. Our header-only delegates are just as fast as C-style function pointers, compatible with std::function, and allow any object to delegate work to stateful member functions without knowing anything about the class they belong to. We use delegates for everything from routing packets to creating REST endpoints, and most importantly to tie the whole IP stack together. In this talk we’ll show you how we use delegates in IncludeOS, discuss pitfalls and alternatives, and give you all you need to get started.

Whole program optimization enables higher performance in C++ applications, because of the expanded scope for analysis and optimization. However, the memory and time required to optimize the entire program together as a single unit traditionally has made whole program optimization infeasible for complex and large C++ applications, such as those being built at Google. Additionally, traditional whole program optimization frameworks have not supported fast incremental builds. ThinLTO (Thin Link Time Optimization) is a new compilation model that was recently deployed in the LLVM compiler toolchain to enable scalable whole program optimization for these huge C++ applications, and additionally enables the fast incremental builds required for use in day-to-day development. In this talk we’ll describe why whole program optimization is beneficial for C++ applications, how the ThinLTO compilation model enables scalable and incremental builds, and how ThinLTO can be integrated with distributed build systems for even faster whole program builds. Additionally, we’ll describe implications for C++ developers.

Have you ever tried writing a web application with C++? Can opening a file and serving it via HTTP be as simple as writing 20 lines of Python? With the undeniable importance of web development, C++ cannot allow itself to ignore such an important field, especially with the rising competition from system programming languages such as Rust, D, and Go. Join us as we explore modern approaches to asynchronous IO and socket communication, the advantages and disadvantages of using a unikernel, and the respective performance implications. We'll also take a look at how future iterations of the C++ standard library could solve some of those problems.

The ink on C++17 has barely dried, but the major compilers already support most features. It's high time for a reality check! This talk is a report on the ongoing effort of porting sqlpp11 to C++17, showing real-world usage of the new features.

“A 14 year old code base under active development, 2.5 million lines of C++ code, a few brave nerds, two powerful tools and one hot summer…”, or “How we managed to clang-tidy our whole code base, while maintaining our monthly release cycle”. Did I mention that we’re a Windows-only dev team using Visual C++? That’s right, we’re going to continue using both Visual Studio (2017) and Clang tools on the side, to modernize and improve our code quality. I’ve just come back from an interesting journey… and I want to share with you some of the most exciting experiences my team and I had along the way, and a few things we’ve learned that you may take with you on your next “travels”. It all started a year ago, at CppCon, with a simple but life-changing decision: we would stop worrying about whitespace and start our addiction to smart C++ tools with clang-format. We didn’t realize it at the time, but this was just the first leg of our great journey; next we decided to hop on the clang-tidy train and set out to modernize our aging code base and find hidden bugs along the way with the clang-tidy static analyzer. The hard part was getting all our code to compile with clang, using the correct project settings (synced with Visual Studio) and Windows SDK dependencies (our code has a fairly wide Windows API surface area). After that, clang-tidy was a breeze to use and we immediately integrated it into our workflow. I still cannot believe the code transformations we were able to do with its ‘modernize’ modules, and some of the subtle latent bugs we found and fixed with its static analyzer and ‘cppcoreguidelines’ checks. Luckily, we took a lot of pictures and kept a detailed travel log, to share this fruitful journey with you, now. We’ll also share some tools we developed to help you with this workflow: automation tips & configs (Jenkins, MSBuild), open-source PowerShell scripts (clang-tidy on Visual Studio projects), a free Visual Studio extension, and more.

Software keeps changing, but not always as fast as its clients. A key to maintaining a library in the long run is to ensure a proper versioning of the API and ABI. Not only does this give a clear picture of both source and binary compatibility between the versions, but it also helps design by making breaking changes explicit to the developer. In this talk I will define API and ABI in terms of impacts on compatibility, explain the difference between breaking and non-breaking changes and present a few techniques to handle them. We will quickly explain what APIs are, with an emphasis on the notion of contracts. Then the usually lesser-known notion of ABI will be explained, going over the concepts of calling conventions, name mangling, and, most importantly, sizes, alignment and offsets in data structures. We will see how to use semantic versioning (semver) in C++ by considering not only changes to the API but also to the ABI and offer some advice on how to change API and ABI over time and how to minimize the impacts.

The C++ Core Guidelines were announced at CppCon 2015, yet some developers have still never heard of them. It's time to see what they have to offer for you, no matter how much C++ experience you have. You don't need to read and learn the whole thing: in this talk I am pulling out some highlights of the Guidelines to show you why you should be using these selected guidelines. For each one I'll show some examples, and discuss the benefit of adopting them for new code or going back into old code to make a change. Beginners who find the sheer size of the language and library daunting should be able to rely on the Guidelines to help make sane choices when there are many ways to do things. Experienced C++ developers may need to leave some of their habits behind. Developers along this spectrum could benefit from seeing what the Guidelines have to offer, yet the guidelines themselves are just too big to absorb all at once. My examples will be chosen to be beginner-friendly and the focus will be on what's in it for you: faster code, fewer bugs, and other tangible benefits.

This session is intended to help the advanced programmer to understand what coroutines and fibers are, what problems they solve and how they should be applied in practice. The session begins with an overview of these concepts, comparing them with threads, and demonstrating how they are exposed by the Boost libraries. Apart from being clean and succinct as Boost libraries typically are, the authors of these libraries have gone to great lengths to ensure that fibers and coroutines expose a programming model consistent with that of threads. This will make them seem very familiar. During the session I will demonstrate how fibers and coroutines can be used together with the powerful Boost.Asio library to solve some commonly occurring problems. To conclude, I will provide some practical tips and guidelines for those who are adding fibers and coroutines to their programming diet.

Software development of automotive control units has long been in the hands of hardcore C developers. With the increasing need for high-performing, multi-core processors and for applications that can be updated over the Internet, this has changed. The recently released Adaptive AUTOSAR standard fully embraces C++11/14 as its language of choice. This leverages new opportunities for AUTOSAR applications, but also poses new challenges to ensure functional safety and to train developers. Let’s have a look at some Adaptive AUTOSAR APIs and at the AUTOSAR “Guidelines for the use of the C++14 language in critical and safety-related systems” and see how they fit into the bigger picture.

The AWS SDK for C++ was designed with a few important tenets. Modern C++ (versions 11 and later), Cross-Platform, User Customization with sane defaults, and no dependencies. A year after launching for general availability, we've been thinking about how these tenets have served us well, and the challenges we've encountered when applying them. In this talk, we will discuss the difficulties we encountered in design and implementation, and then we will cover the aspects of our design that have worked out well. The topics we will cover are: Build System choices, the C++ standard library, Dependency choices, Threading models, Memory models, IO-based programming, ABI compatibility, and packaging.

In 2003 we published "C++ Templates - The Complete Guide". Now, 14 years and 3 major C++ versions later, we are publishing the second edition. The content grew and changed dramatically. And I, the representative application programmer among the authors, learned a lot while at the same time shaking my head again and again. This talk is a personal overview of the changes Modern C++ brought to generic C++ programming and what that means for ordinary application programmers. It's not only about new features, it's also about the discussions we had regarding style and usability (for example, about our recommendations of how to declare parameters in function templates).

Are allocators worth the trouble? What situations merit their use? How are they applied effectually? What’s the performance impact? This practical talk by large scale C++ expert Dr. John Lakos demonstrates that having allocators in your tool box may lead to orders of magnitude speed improvements. The runtime implications of the physical location of allocated memory are often overlooked, even in the most performance critical code. In this talk, we will examine how the performance of systems can degrade when using `new`/`delete` and `std::allocator`. We will contrast these global allocators, which allocate memory globally for a system, with local allocators that each allocate memory for a proper subset of objects in the system. We will also demonstrate how local allocators can reduce or entirely prevent the degradation seen in systems that rely on the global allocator. Six dimensions – fragmentability, allocation density, variation, locality, utilization, and contention – will be introduced to depict the potential for performance penalties and aid the listener in determining which local allocator will offer the best performance in their subsystems. Evidence will be presented that identifying these dimensions, and selecting a local allocator based upon them, can lead to *order-of-magnitude* reductions in run time compared to systems using a global allocator.

RCU (Read-Copy-Update) is often the highest-performing way to implement concurrent data structures. The differences in performance between an RCU implementation and the next best alternative can be striking. And yet, RCU algorithms have received little attention outside of the world of kernel programming. Largely, this is because the most common drawback of RCU solutions is complicated, and often wasteful, memory management. Kernel code has some advantages here, whereas a generic solution is much harder to design. There are, however, cases when RCU is simple to use, offers very high performance, and the memory issues are easy to manage. In fact, you may already be using the RCU approach in your program without realizing it! Wouldn't that be cool? But careful now: you may already be using the RCU approach in your program in a subtly wrong way. I'm talking about the kind of way that makes your program pass every test you can throw at it and then crash in front of your most important customer (but only when they run their most critical job, not when you try to reproduce the problem). In the more general case, we have to confront the problems of RCU memory management, but the reward of much higher performance can make it well worth the effort. This talk will give you an understanding of how RCU works, what makes it so efficient, and what the conditions and restrictions are for a valid application of an RCU algorithm. We focus on using RCU outside of kernel space, so we will have to deal with the problems of memory management... and yes, there will be garbage collection.

Undefined behavior is a clear and present danger for all application code written in C++. The most pressing relevance is to security, but really the issue is one of general software correctness. The fundamental problem lies in the refusal of C++ implementations (in general) to trap or otherwise detect undefined behaviors. Since undefined behaviors are silent errors, many developers have historically misunderstood the issues in play. Since the late 1990s undefined behavior has emerged as a major source of exploitable vulnerabilities in C++ code. This talk will focus on trends in the last few years including (1) increased willingness of compilers to exploit undefined behaviors to break programs in hard-to-understand ways and (2) vastly more sophisticated tooling that we have developed to detect and mitigate undefined behaviors. The current situation is still tenuous: only through rigorous testing and hardening and patching can C++ code be exposed to untrusted inputs, even when this code is created by strong development teams. This talk will focus on what developers can and should do to prevent and mitigate undefined behaviors in code they create or maintain.

C++ Modules TS is now implemented (to various degrees) by GCC, Clang, and MSVC. The aim of this talk is to provide practical information on the mechanics of creating and consuming modules with these compilers. It is based on our experience adding modules support to the build2 toolchain and then modularizing some of its components. We start with a brief introduction to C++ modules, why we need them, and how they relate to other physical design mechanisms, namely headers, namespaces, and libraries. Next we explore the kind of integration modules will require from a C++ build system. Specifically, when and where is a module binary interface built? How can a build system discover which modules are needed? What are the implications for parallel and distributed builds? Can we finally get rid of the preprocessor? And what happens to header-only libraries in this brave new modularized world? With a firm understanding of the implications C++ modules have on the build process, we can try to answer some of the module design questions: What is an appropriate module granularity? Should we have separate module interface and implementation units? Can we have a dual header/module interface for legacy support? Are module-only libraries to become all the rage?

This session will present how to leverage C++'s diverse set of analysis tools with existing Continuous Integration services to increase a project's quality continuously over time. In addition, we will discuss the advantages and disadvantages of using these tools, drawing on real world open source examples. Those interested in Continuous Integration or in learning new ways to increase the quality of their code will enjoy this presentation. Continuous Integration (CI) is the act of continuously integrating small changes to a code base. The goal is to identify integration issues prior to making a change, ensuring a project's quality over time. Thanks to virtualization, today we have many services that provide automated continuous integration for C++, including Travis CI and AppVeyor, covering Windows, Linux and macOS. Typically CI is used to compile, and sometimes execute, automated tests to ensure a change to a project doesn't result in a compilation issue or regression. C++ is, however, a diverse, rich environment with numerous analysis tools available to C++ developers. These tools can be integrated into these CI services to provide automated analysis of any change being made to a project prior to its acceptance, ensuring the highest possible quality of the project. During this session we will step through an open source project designed to demonstrate how to integrate different C++ analysis tools into your CI services. These tools include static analysis (Clang Tidy, Coverity Scan, Codacy and CppCheck), dynamic analysis (Valgrind and Google's Sanitizers), source formatting (Astyle and Clang Format), documentation (Doxygen), code coverage (Codecov, Coveralls, and LLVM's Software-based Code Coverage), cross platform tests (Windows, Cygwin, Linux, and macOS), compiler tests (GCC, Clang, and Visual Studio) and finally C++ libraries designed to assist in reliability and automated testing (Catch, Hippomocks and the Guideline Support Library). We will also openly discuss how to integrate these tools into existing projects (both large and small), as well as common problems encountered while using these tools autonomously in a CI environment.

Type punning, treating a type as though it is a different type, has a long and sordid history in C and C++. But, as much as we'd like to deny its existence, it plays an important role in efficient low-level code. If you've ever written a program that examines the individual bits of a pointer or of a floating point number, then you've done type punning. Given its long legacy, some of the techniques for type punning that were appropriate, even encouraged, earlier in history now live in the realm of undefined behavior. We'll identify which techniques are now proscribed and postulate why. We'll also explore ways to do type punning in C++17 that sidestep undefined behavior and are hopefully as efficient as the older techniques. In this session we will look at:

* Common (and some uncommon) motivations for type punning.

* Techniques for type punning, both good and bad, all ugly.

* Related topics (like type conversions and std::launder()) with an eye toward unspecified and undefined behavior.

Game audio programming is a sort of dark art practiced and understood by its few practitioners, but audio is an important and vibrant part of any game. There is a huge body of knowledge and history here, but the C++ standard, unfortunately, has yet to acknowledge the existence of audio output devices. In this talk we'll discuss the current state of the art in game audio programming, and what steps we can take toward bringing real-time audio to the C++ standard. We will begin with first principles: representing waveforms and playback of sounds. With a few basic mathematical principles out of the way, we'll discuss how a low-level mixer works, and the sorts of tools that game audio builds on top of it. Finally, we will present a set of abstractions that are useful for real-time audio, and how they can be brought into the C++ standard.

We will discuss what reflection is and how it can be implemented in Modern C++. The techniques used will include a mix of C++11/14 features (void_t, tuple, index_sequence, variadic templates, auto functions, decltype(auto), constexpr, type_traits, etc), classic C++ features, and macros. We’ll use a couple of example libraries to show the essence and power of compile-time reflection and show how to simplify and improve their implementation with C++17 features such as inline variables, constexpr if, structure binding, fold expressions, and string_view. The first example is a library that can serialize a struct into any of a variety of data formats, such as JSON, XML, MessagePack, or a custom format. We’ll then apply the same techniques to implement an Object-Relational Mapping (ORM) library to serialize structs into the tables of any of a variety of databases, such as SQLite, MySQL, Postgres, etc. We’ll discuss some of the challenges and limitations of these techniques and what features could be added to C++ to improve support for compile-time reflection.

Have you ever tried writing a web application with C++? Can opening a file and serving it via HTTP be as simple as writing 20 lines of Python? With the undeniable importance of web development, C++ cannot allow itself to ignore such an important field, especially with the rising competition in the field of system programming languages coming from Rust, D and Go. Join us as we explore modern approaches to asynchronous IO and socket communication, the advantages and disadvantages of using a unikernel, and their respective performance implications. We'll also take a look at how future iterations of the C++ standard library could solve some of those problems.

Building an API easy enough for kids to understand (in C++) is a challenge. Every design decision, from the circuit board to the plastic, can affect the results. We'll talk about product design, manufacturing, firmware, software, and the Arduino API as we cover the Jewelbots timeline from Kickstarter to shipping to distribution. Additionally, hear from the two girls who are the top Jewelbots users from the Bellevue area! You'll learn what they have built and how they view the future of C++.

For the past year or so, I have worked with Herb Sutter on language support for compile-time programming, reflection, metaclasses, and code injection for the C++ programming language. This talk will focus on the related language features of static reflection and projection. These features aim to help programmers work with source code as data, and in some limited ways, use that data to write software. I plan to trace the evolution of this work from its original proposal to its current implementation in the Clang C++ compiler (two implementations, actually). In particular, I will discuss design criteria, decisions, and issues related to reflection and projection as we implemented and experimented with them. I will also discuss how our current approach is shaped by alternative proposals, community and committee feedback, and restrictions imposed by the C++ programming language itself (i.e., what can you do and what can't you do).

Networking is coming to a standard near you — but how do you use it? Based on similar concepts found in Boost.Asio, the Networking TS provides a rich API for synchronous and asynchronous network communications. The library boasts an impressive TTHW indicator (Time To Hello World); however, implementing robust client and server solutions often baffles newcomers and seasoned practitioners alike. Inspiration for this talk comes from the questions we have received on IRC, Slack, reddit, private emails, and classes we teach. In this tutorial, Michael will provide a quick crash-course on using the Networking TS for asynchronous communication and then present patterns and idioms used at Ciere to address subjects including:

* Lifetime issues

* Clean startup and shutdown

* Timeouts, errors, and exceptions

* Taming events

* Decoupling and layering

This session will be of interest to individuals wanting to get started with the Networking TS or who need some inspiration in building robust systems. Many of the techniques presented will also be applicable with Boost.Asio and the standalone Asio libraries.

The title of this talk pays tribute to the "Effective Qt" columns, a series of blog posts started by Marc Mutz many years ago, whose title was in turn inspired by Scott Meyers's book series. The purpose of the columns was to provide in-depth explanations about Qt data structures and design patterns to C++ developers who wanted to know more about how to use Qt core classes, and how to use them "effectively". This talk aims to be an up-to-date version of (some of) the advice in the columns, in the light of the major changes introduced to core classes in Qt 5's lifetime (including changes that will come with Qt 5.10, scheduled to be released at the end of 2017). Moreover, we will see how the language and Standard Library features added in the latest C++ standards interact with long-established practices when developing Qt code. The talk is structured as a series of best practices, guidelines and tips & tricks, learned from many years of experience developing Qt-based projects, as well as the significant effort spent developing Qt itself and steering its evolution. For each piece of advice, a technical discussion of the rationale behind it will be provided, and possibly some indication about future developments and what to expect in upcoming Qt versions. The topics mentioned in this talk cover many areas in Qt, and should contain something new or interesting for Qt developers using C++, hopefully helping them to build quality libraries and applications. The main focus areas will be around Qt containers (and their algorithms) as well as Qt string classes. Attendees are expected to have some working knowledge of Qt C++ APIs (and especially C++ APIs in QtCore).

Learn new ways to think about class design, that you can apply to your own projects! In this talk we'll start with a simple class that models an HTTP message. We’ll go over the limitations of the simple declaration, then walk through a series of guided improvements. We will explore ways to think about class models, create a concept as a customization point, perform type checking, and document a concept. The example class we will explore is based on the message container found in the Boost.Beast library. You do not need to know anything (or care) about network protocols. This is about building better classes.

Mutexes have frequently been observed to outperform reader-writer locks in domains where, logically, reader-writer locks should dominate. I was recently given an opportunity to address this inconsistency and, to demonstrate my certainty of success, accepted a bet regarding outperforming a mutex for a high read, low write work task with short — but not extremely short — lock hold times. I lost the bet. I resolved to understand how I lost this bet and, in my mind at least, convert this "loss" to a "win". The bet focused on a Linux platform (the evaluations presented are multi-platform). This presentation will discuss design criteria for a reader-writer lock, the "losing" implementation, the performance results for the "losing" implementation, a possible explanation for the loss, the novel "winning" implementation, and the results supporting the value of the "winning" implementation. A basic understanding of mutexes, reader-writer locks, and atomic operations is recommended for attendees.

With the advent of modern computer architectures characterized by — amongst other things — many-core nodes, deep and complex memory hierarchies, heterogeneous subsystems, and power-aware components, it is becoming increasingly difficult to achieve the best possible application scalability and satisfactory parallel efficiency. The community is experimenting with new programming models based on finer-grain parallelism and flexible, lightweight synchronization, combined with work-queue-based, message-driven computation. Implementations of such a model are often based on a framework managing lightweight tasks, which allows highly hierarchical parallel execution flows to be coordinated flexibly. The recently growing interest in the C++ programming language in industry and in the wider community increases the demand for libraries implementing these programming models for the language. Developers of applications targeting high-performance computing resources would like libraries that provide higher-level programming interfaces shielding them from the lower-level details and complexities of modern computer architectures. At the same time, those APIs have to expose all necessary customization points so that power users can still fine-tune their applications, enabling them to control data placement and execution if necessary. In this talk we present a new asynchronous C++ parallel programming model which is built around lightweight tasks and mechanisms to orchestrate massively parallel (and distributed) execution. This model uses the concept of (std) futures to make data dependencies explicit, employs explicit and implicit asynchrony to hide latencies and to improve utilization, and manages finer-grain parallelism with a work-stealing scheduling system enabling automatic load-balancing of tasks. As a result of combining those capabilities, the programming model exposes auto-parallelization capabilities as emergent properties.
We have implemented this model as a C++ library exposing a higher-level parallelism API which fully conforms to the existing C++11/14/17 standards and is aligned with the ongoing standardization work. This API and programming model has been shown to enable writing parallel and distributed applications for heterogeneous resources with excellent performance and scaling characteristics.

Slow builds keep every C++ developer waiting. At Facebook we have a huge codebase, where the time spent compiling C++ sources grows significantly faster than the size of the repository. In this talk we will share our practical experience optimizing build times, in some cases from several hours to just a few minutes. The majority of the techniques are open sourced or generic and can be immediately applied to your codebase. Facebook strives to squeeze build speed out of everything: starting from a distributed build system, through the compiler toolchain, and ending with the code itself. We will dive into different strategies for calculating cache keys, potential caching traps, and approaches to improve cache efficiency. We tune the compiler, specifically with compilation flags, profile data and link time options. We will talk about the benchmarks we use to track improvements and detect regressions and what challenges we face there. Finally, you will learn about our unsuccessful approaches with an explanation of why they didn't work out for us.

You'd like to improve the performance of your application with regard to memory management, and you believe this can be accomplished by writing a custom allocator. But where do you start? Modern C++ brings many improvements to the standard allocator model, but with those improvements come several issues that must be addressed when designing a new allocator. This talk will provide guidance on how to write custom allocators for the C++14/C++17 standard containers. It will cover the requirements specified by the standard, and will describe the facilities provided by the standard to support the new allocator model and allocator-aware containers. We'll look at the issues of allocator identity and propagation, and examine their implications for standard library users, standard library implementers, and custom allocator implementers. We'll see how a container uses its allocator, including when and how a container's allocator instance propagates. This will give us the necessary background to describe allocators that implement unusual semantics, such as a stateful allocator type whose instances compare non-equal. Finally, the talk will provide some guidelines for how to specify a custom allocator's public interface based on the semantics it provides.

Deep Learning is a subfield of artificial intelligence that employs deep neural network architectures and novel learning algorithms to achieve state of the art results in image classification, speech recognition, motion planning and other domains. While all machine learning algorithms are initially formulated in mathematical equations (the only programming language where single letter variable names are encouraged), they must eventually be translated into a computer program. Moreover, because deep neural networks can often be composed of many hundreds of millions of trainable parameters and operate on gigabytes of data, these computer programs have to be fast, lean, often distributed and squeeze every last ounce of performance out of modern CPUs, GPUs and even specialized hardware. This is synonymous with saying machine learning algorithms are usually implemented in C or C++ under the hood, even though libraries like TensorFlow, Torch or Caffe expose APIs in Python or Lua to ease the process of research and speed up iteration. This talk aims to break the single responsibility principle and do three things at once:

1. Give a sweeping introduction to the state of the art in deep learning,

2. Give examples of what it means to implement neural networks in C++, from an implementer's perspective,

3. Give examples of building deep learning models in C++, from a researcher's perspective.

Here, the distinction between building and implementing is that the former means stacking together high level modules to achieve some machine learning task, while the latter means actually writing the CPU or GPU kernels that make the magic happen. The goal of the talk is for every attendee to walk away with a general understanding of the state and challenges of the field and hopefully be in a position to implement and build their own deep learning models.

With the advent of a new, persistent-memory-enabled world, the current software industry must prepare for changes. Looking forward to meet the new requirements set by this new type of hardware, a new standard API should be introduced to ease the adoption of this new and exciting technology. During the development of the NVM (Non Volatile Memory) Library, it became apparent that the C API is complex and hard to use. To remove some of the pain points, a proposal of a new C++ API was made. This lecture will introduce the API and explain some of the intricacies behind it. This entails both the basic concepts of persistent memory programming, like pointers and transactions, and a prototype integration with the standard library's containers. Hopefully this will spark a discussion and will help validate the proposed changes. Deciding on an API this early on will help developers in the early adoption of this potentially game-changing technology.

C++ Coroutines come naked: just the language feature, with no library support apart from a few traits that allow developing coroutine adaptors. In this session we will start with just a compiler that implements the Coroutines TS and a reference Networking TS implementation, and through (mostly) live coding together we will develop a cool, efficient and beautiful async networking app.

Abstract: This talk covers the most important aspects of rocket safety software development, from idea and design through implementation and testing, along with safe design patterns and critical error handling in fault-tolerant systems.

- Open source libraries can take you to space: How to choose open source libraries for use in Federal Aviation Administration (FAA) certification, and how to use them correctly depending on the required safety level. We will also discuss how to handle hard FAA requirements throughout the software development cycle.

- Safe design patterns: We will discuss multiple design patterns for use in safety-critical systems, including a compile-time observer pattern built with template metaprogramming, along with guidelines for choosing a pattern depending on safety level, timing requirements, memory layout and testing.

- Error handling: Rocket errors are gold; they are precious and we don't want to lose them. When an error occurs, it is more important to get as much telemetry as possible before losing the rocket. Since testing a real rocket means a real mission, telemetry can make a difference for future flights, and error handling is critical to achieving this. We will present error handling techniques for startup and run time, including throwing policies, interface pre/post conditions, and class interface design techniques that implement error handling along with testing, as well as guidelines for applying them depending on safety level and application, and for deciding what constitutes a fatal error.

This is a talk about solving the most difficult problem a software engineer ever faces, converting a large codebase with antiquated designs and spotty quality into a state-of-the-art, modern system. We'll be covering clang-based refactoring, mnemonic reasoning methods, safe rewrites, coding standards, and, oh yes, migration paths. If you've ever been tasked with making a legacy codebase the best-in-class, or think you might, then this talk is for you.