David's Blog

The single responsibility principle is a computer programming principle that states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.

It seems to generate a lot of confusion. Just a few days ago, Jon Reid had to clarify a misconception about that very principle:

SRP is misunderstood. Despite the name, it's not about some Platonic "single responsibility". No, it's "one reason to change." https://t.co/2iuCZH7oLd

There, he refers to the definition of Robert C. Martin, who expresses the principle as, "A class should have only one reason to change." In this post, I want to write down my own thoughts about the "Single Responsibility Principle". I hope I can clarify a few things and not add too much new confusion ;)

A Definition that "Works for Me"

So, some of the confusion comes from "single responsibility" vs. "single reason to change". The definition that talks about "responsibilities" is hard to follow in practice: What exactly is a responsibility? What if I can divide the responsibility into multiple sub-responsibilities? How can I make sure that all the code is part of the same responsibility?

On the other hand, "one reason to change" sounds more like a heuristic to me than a real definition. Yes, when a class has many different responsibilities, it also has many reasons to change. But if that were the definition, we should rename the principle to "SRTCP" (Single Reason To Change Principle).

So, I was searching for a definition that works for me and that gives me some guidance in my day-to-day work. After discussing it with several very good developers, I now really like the following definition:

You can describe everything a design element (method, class, module, ...) does - at a reasonable level of abstraction - as a single, coherent thing.

In other words, if you use "CRC cards" to design your classes, the "Responsibilities" column should contain a single bullet point.

If the level of abstraction is too high, you can describe everything as a single thing ("Is a part of the XYZ system"). If the level of abstraction is too low, everything has many responsibilities ("Execute assembler instruction x. Execute assembler instruction y. ..."). So what is a "reasonable" level of abstraction?

We'll come to that soon, after an example...

Hangman

When I teach TDD workshops, I ask the attendees to implement Hangman (the word guessing game) - even multiple times. I now want to discuss with you a few possible designs for implementing this game (none of them is the best possible design, of course).

Let's start simple. The whole problem is so easy, you can implement everything in a single class:
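A single-class version might look roughly like this (a JavaScript sketch, not the actual workshop code; all names are invented):

```javascript
// A sketch of a single-class Hangman: rules, game state,
// hint generation and (console) UI all tangled into one place.
class Hangman {
    constructor(secretWord, remainingTries = 7) {
        this.secretWord = secretWord;
        this.remainingTries = remainingTries;
        this.guessedLetters = [];
    }

    guess(letter) {
        if (this.isOver()) return;
        this.guessedLetters.push(letter);
        if (!this.secretWord.includes(letter)) {
            this.remainingTries -= 1;
        }
        this.render();
    }

    hint() {
        return [...this.secretWord]
            .map(c => (this.guessedLetters.includes(c) ? c : '_'))
            .join('');
    }

    isOver() {
        return this.remainingTries === 0 || this.hint() === this.secretWord;
    }

    render() {
        // UI responsibility, living in the same class as the rules
        console.log(`${this.hint()} (${this.remainingTries} tries left)`);
    }
}
```

You can already see at least four things this class does: it implements the rules, keeps the game state, generates hints, and draws the UI.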

This design has a few disadvantages. Most glaringly, in a TDD class: It is really hard to test - especially if you do not make compromises like making methods public that should actually be private.

We can try to simply split the class along the four responsibilities that we have already identified:

Now you can easily test three of the four classes, and with some work, you can probably even test the UI. And you can test every class in complete isolation from the other classes, which is great for achieving stable tests...

Is 4 classes for such a simple problem over-engineering? Quite possibly. But I am trying to make a point here...

To add clarity to your design, make sure that all design elements in a level are roughly on the same level of abstraction (yes, there is gut-feeling involved in deciding that).

So, all the public methods in a class should be roughly on the same level of abstraction, with the class itself on a higher level. The public methods delegate the real work to private methods of that class, which are on a lower level.
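A tiny example of what that could look like (a JavaScript sketch, names invented):

```javascript
class GuessEvaluator {
    // Public method: reads at the level of the game's domain language.
    evaluate(secretWord, guessedLetter) {
        return this.wordContains(secretWord, guessedLetter) ? 'correct' : 'wrong';
    }

    // Private by convention: one level lower, down in the string details.
    wordContains(word, letter) {
        return word.indexOf(letter) >= 0;
    }
}
```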

Sometimes you can find interesting responsibilities by looking at the tests of a class or method. And when you split it, you might need new design elements (a new package or a new class) to keep everything at roughly the same level of abstraction.

Tests and Responsibilities

So, I wrote some tests for the "Rules" class - this time using a different design, where I do not split out the game state into its own class. Here is the output of Jest, a JavaScript test runner:

Hangman - Implements the flow of a single Hangman game, given a secret word.
√ returns a hint that contains only underscores at the start of the game
√ shows a hint with the correct length for the secret word "test" at the start of the game
√ shows a hint with the correct length for the secret word "a" at the start of the game
√ shows a hint with the correct length for the secret word "few" at the start of the game
√ shows a hint with the correct length for the secret word "cases" at the start of the game
√ updates hint to "c____" after guessing "c" when word is "cases"
√ updates hint to "c_s_s" after guessing "c,s" when word is "cases"
√ updates hint to "c_ses" after guessing "c,s,e" when word is "cases"
√ does not update the hint when making a wrong guess
√ decrements the number of remaining tries after a wrong guess
√ does not decrement the number of wrong guesses after a right guess
√ indicates game is over ("Lost") when there was only one guess remaining and the user guessed wrong
√ indicates game is over ("Won") when the user guessed all letters of the secret word
√ does not accept any input after the game is over

Oh, some of these tests seem to belong together. Let's group them, and look at the test output again:

Hangman - Implements the flow of a single Hangman game, given a secret word.
Generates Hints from the secret word and the input
√ returns a hint that contains only underscores at the start of the game
√ shows a hint with the correct length for the secret word "test" at the start of the game
√ shows a hint with the correct length for the secret word "a" at the start of the game
√ shows a hint with the correct length for the secret word "few" at the start of the game
√ shows a hint with the correct length for the secret word "cases" at the start of the game
√ updates hint to "c____" after guessing "c" when word is "cases"
√ updates hint to "c_s_s" after guessing "c,s" when word is "cases"
√ updates hint to "c_ses" after guessing "c,s,e" when word is "cases"
√ does not update the hint when making a wrong guess
Keeps track of remaining guesses, so UI can draw the gallows pole
√ decrements the number of remaining tries after a wrong guess
√ does not decrement the number of wrong guesses after a right guess
Keeps track of whether the game is running or over (Won / Lost)
√ indicates game is over ("Lost") when there was only one guess remaining and the user guessed wrong
√ indicates game is over ("Won") when the user guessed all letters of the secret word
√ does not accept any input after the game is over

It seems like this class has three different responsibilities (at least at some level of abstraction). So, if I wanted, I could split this "Rules" class even further, into one class for each of the groups, and one to coordinate them. Then I would probably need a package to group these new classes, and the responsibility of that package could now be "Implements the state changes of a single game, based on the rules".
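Such a split could be sketched like this (in JavaScript; all class names are invented for illustration):

```javascript
// Generates hints from the secret word and the guessed letters.
class HintGenerator {
    hint(secretWord, guessedLetters) {
        return [...secretWord]
            .map(c => (guessedLetters.includes(c) ? c : '_'))
            .join('');
    }
}

// Keeps track of remaining guesses, so the UI can draw the gallows pole.
class GuessCounter {
    constructor(remainingTries) { this.remainingTries = remainingTries; }
    wrongGuess() { this.remainingTries -= 1; }
}

// Keeps track of whether the game is running or over (Won / Lost).
class GameStatus {
    status(hint, secretWord, remainingTries) {
        if (hint === secretWord) return 'Won';
        if (remainingTries === 0) return 'Lost';
        return 'Running';
    }
}

// Coordinates the three collaborators; its single responsibility
// is the flow of a single guess.
class Rules {
    constructor(secretWord, tries = 7) {
        this.secretWord = secretWord;
        this.guessed = [];
        this.hints = new HintGenerator();
        this.counter = new GuessCounter(tries);
        this.gameStatus = new GameStatus();
    }
    guess(letter) {
        if (this.status() !== 'Running') return; // no input after game over
        this.guessed.push(letter);
        if (!this.secretWord.includes(letter)) this.counter.wrongGuess();
    }
    hint() { return this.hints.hint(this.secretWord, this.guessed); }
    status() {
        return this.gameStatus.status(
            this.hint(), this.secretWord, this.counter.remainingTries);
    }
}
```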

Does it always make sense to split a class like that? That depends on a lot of things, but from the perspective of the Single Responsibility Principle, we could do it...

Conclusion

The Single Responsibility Principle gives you an indicator of when to change your design. Split your methods / classes / modules when they have more than one responsibility. Restructure code when your classes / methods / modules do not fully encapsulate their responsibility.

When your design elements have many different responsibilities, they have many reasons to change. And they are also hard to test. When your design elements do not reasonably encapsulate their responsibility, changes will cascade through your code. And again, it will be harder to test.

But do not start to split all your classes along their responsibilities right away! The SRP should not be the only driving force of your designs - there are other forces, and sometimes they give you conflicting advice. Take, for example, the SOLID principles - five design principles, of which the SRP is only one.

Being a freelance consultant / coach, I have worked with many different teams in the last 10+ years. As far as I am concerned, I was never the best developer on the team.

No, I do not have any proof for that, of course not. It is more of a mindset than something that can be objectively proven to be true or false. Let me explain...

Learning Every Day

Learn from everyone, follow no one.

If you hire me, I come to your company to learn something. Yes, I also come to provide more value for you than you pay me - to teach you something, to coach your team, to help you solve a problem, to write some code. That's why you hired me.

But for me, it is also a learning experience. I am trying to get better at what I do every day. I read many books and blogs, and I try to learn from everyone. And I really think that you can learn something from everyone: Even the most senior developer might learn a thing or two from the most junior, if she is open to learning something.

Now, if I were on a team and had an attitude like "I am the best X here", learning from everyone else would just become a lot harder, at least in the field of X. Also, I think others on the team might notice that attitude, and this would also hinder mutual learning / teaching.

But What if I am the Best?

Let's assume, hypothetically, that I join a team where, after some time, all my evidence suggests that I really am the best developer (I do not think this has ever happened to me). I would still try hard to think that I am not the best.

A little humility can help in many aspects. Mutual learning / teaching, like above, is one of them. A better team culture is another. When you ask for help, you encourage people to talk to you on the same level.

Too Many Dimensions

Also, there are too many dimensions in which you can be good at software development. And a team working on any non-trivial task needs many of those dimensions.

So who is the best developer - the person who is good at keeping the overall architecture in mind, the person who knows all the details of your programming language, the person who gets the details right, or anyone else on the team? You need all of them!

I think that I, personally, am quite competent in several programming languages, in software architecture / design, in techniques for better software quality, and in facilitating better team practices. I think that, over time, I have become quite good at dealing with legacy code. I am also quite competent as a trainer and as a technical coach.

But I am probably not the best developer or tester or architect or coach or trainer you have ever seen.

Good in Multiple Disciplines

I think Scott Adams (I cannot find the quote anymore) once wrote that it is really hard to become world class in a single field, but when you become merely quite good in multiple fields, you might already be world class for this combination of fields. And this can be very valuable.

I always liked this idea, and I try to work like that. This is why I think I am competent in so many different fields, but surely not the best in any of them. And so far, this has worked quite well for me.

But even this is not what I am trying to say.

Keep Learning

My goal is to keep learning, to keep becoming better. And a little humility can help a lot here.

When my state of mind is "I am not the best X in here", the questions become "What can I learn from you?" or "How can you help me today?" and, of course, "How can we help each other today?".

But, of course, this is also something I have to constantly work on. I always have to remind myself to ask these questions. So this post is also a little reminder to myself ;)

Last week, I hosted an online training course for people from all over Europe. There they learned how to build web applications with React and Redux. We recorded videos during the course, and I prepared some more videos and training materials for you. Get them here:

Of course there are defects in legacy code. But when you are a developer working on changing, refactoring or enhancing legacy code, many of the "defects" you'll find are probably desired behaviour (or have been, at some point in time). And even when they are not, you often still cannot be sure if you can fix the undesired behaviour without negatively affecting users. Let me explain...

...Changes Colour Unexpectedly

I once wrote a piece of really bad code that I use when I facilitate refactoring exercises at conferences, at user group meetups, and during my trainings: the BabySteps Timer. A few days ago, I invited people on Twitter to refactor this code as a challenge.

One of the developers who accepted the challenge, Franziska, submitted a defect report:

Clock resets to white colour when time is over

Not sure if bug or feature, if the time has expired, the clock turns red, starts counting down from 2:00 again and then it turns white after a couple of seconds (see screenshots).

I am fully aware that Franziska only reported some unexpected behaviour she saw, and did not have any intent of changing that behaviour during a refactoring exercise.

But this reminded me of a situation I experienced at a past client, where we actually changed some unexpected behaviour, and it didn't turn out well...

But... The Numbers are Wrong!

Some years ago, I was working with 2 Scrum teams as an agile coach. When I joined, they were already one year into a project where they were changing some legacy code: They refactored the server, and they completely re-wrote the client.

In addition to its main purpose, the software calculated some statistics. When we started to work on that feature, we found out that the numbers it calculated were wrong. Under some conditions, it gave inaccurate answers that were at least in the right ballpark - but sometimes the numbers were not just inaccurate, they were plain wrong.

None of the business analysts could remember what this statistics feature was used for, so we tried to find out who uses it. We looked at the logs of the last 6 months and found out: It was never used! Not a single time!

Our Product Owner asked some key stakeholders, but no one could explain to us why this feature would be required. So we removed it.

A few months later, the "Blocker" defects were rolling in: "The statistics feature is missing! We cannot do our work!" they said.

We found out that this feature was only used during 4-6 weeks of the year and only by some users, and we did not look at this time frame when searching the logs.

We tried to convince the users to allow us to change the statistics so they would get the correct numbers. But we did not get a budget for that. They just wanted the old feature back, because "We only use them as ballpark estimates anyway". "And when the software gives you wrong numbers?" we asked - "Oh, we can spot that", they said.

So we put the faulty feature back in. But even after we put the feature back in, nobody wanted us to fix the defects...

Unexpected != Defect

In a real legacy code situation, do not assume that you have found a defect just because you have found some really strange behaviour. Ask business analysts. Ask stakeholders. Ask real users. Gather data from usage statistics and logs.

And even after you did all this, you cannot be 100% sure...

Yes, there are defects in legacy code. And yes, you will find some of them. But, as a developer, beware of just fixing the defect (especially if you are new to the project): The strange behaviour might not be a defect at all. And even if it is one, your fix might cause more problems than the defect.

Yesterday I facilitated a "legacy code refactoring" session at the Softwerkskammer München Meetup. There were ~50 craftswomen and craftsmen, and all of them were coding, trying to improve some particularly bad code I wrote.

We did three different exercises, each of them for 30 minutes, and in all of them, we tried to bring a piece of bad code under test.

The Code

First I gave them a very short (actually too short - but that was on purpose) introduction to the code: The "Babysteps Timer". This is a very simple GUI program I wrote some time ago, because I needed a short example of really bad code that I could use for refactoring exercises.

The application is just a timer that counts down from 02:00 to zero. Ten seconds before reaching zero, it plays a sound. When it actually reaches zero, it plays a sound and changes the background color to red for a few seconds. When you reset it before it reaches zero, it changes the background color to green for a few seconds. That's it.

The code is reasonably short (a single class with ~150 lines without the Java boilerplate) and most variables are reasonably well named (for some definition of "reasonably" ;) ). Still it is very hard to change: The class has more than 10 different responsibilities, there are two threads, lots of inner classes, and every small part of the code is coupled to almost everything else.

The code makes testing particularly hard because it makes several calls to System.currentTimeMillis() and state changes take a long time: After starting the timer, you need to wait a full second (that's 1 000 000 microseconds!) until you see the timer change. So when your tests can control how fast time progresses for the application, you can test the application more easily and the tests will run faster.
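One common way to get that control is to inject a clock into the timer instead of calling the system time directly. A sketch of the idea in JavaScript (the workshop code itself is Java and calls System.currentTimeMillis(); all names here are invented):

```javascript
// A timer that asks an injected clock for the time, so tests can
// fast-forward instead of waiting real seconds.
class CountdownTimer {
    constructor(clock, durationMillis) {
        this.clock = clock;
        this.startedAt = clock.now();
        this.durationMillis = durationMillis;
    }
    remainingSeconds() {
        const elapsed = this.clock.now() - this.startedAt;
        return Math.max(0, Math.ceil((this.durationMillis - elapsed) / 1000));
    }
}

// In production you would pass { now: () => Date.now() }.
// In tests, a fake clock the test can advance instantly:
class FakeClock {
    constructor() { this.millis = 0; }
    now() { return this.millis; }
    advance(millis) { this.millis += millis; }
}
```

With the fake clock, a test of a full two-minute timer cycle runs in milliseconds.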

Exercise 1: Refactor, then test

Sometimes, it is so hard to find seams for testing in a bit of code that it makes sense to first refactor a bit to make the code more testable, and then add some tests. This was our first exercise:

Perform some refactoring to make the code testable, then write a test.

Participants could do any refactorings they wanted - Get rid of all the static stuff, extract inner classes to their own files, extract some methods, rename stuff - Whatever. But their goal should be to write a first test (any kind of test - unit test or integrated test or behavior test) after ~25 minutes of refactoring.

I told them to rely on their IDEs as much as possible to minimize the amount of manual testing, which slows you down considerably because a full timer cycle lasts 2 minutes.

As far as I know, nobody got to the point where they could write a first green test. And that was exactly the point of this exercise: I wanted to show them how hard it is to first refactor, then test. Even when it's often tempting. Especially in a code base like this, with so much coupling and so little cohesion.

Exercise 2: Test, then refactor

OK, so if we cannot easily make this code testable, maybe we can find a way to test it without changing it. At first, this code looks like there are no seams for testing, but there actually is one: The JTextPane that contains the whole user interface. You can get the whole HTML from this text pane, and you can invoke its HyperlinkListener to simulate button clicks. So, our second exercise was:

Write some high-level functional tests using the timerPane's HTML and hyperlink listener. Then start to refactor.

I also told them that, once they have a few tests, a good first refactoring would be to try to control the progress of time, because this would speed up their existing tests.

Writing those tests will not make the hard refactorings from Exercise 1 any easier, but at least you'll have a safety net of tests before you do it. When you do it right, you can write automated test for the whole user-visible functionality without making any big changes, and then speed them up by controlling time. Then you can start the refactoring with fast, automated tests as your safety net.

I don't think any pair actually did a bigger refactoring. But several people told me afterwards that it was an eye-opener for them how easy it was to write the tests before the refactoring, compared to refactoring first.

Exercise 3: Golden Master

I also showed them the Golden Master technique and told them that it is sometimes very well suited for testing legacy code. So this was our third exercise:

Add log statements to the code, save the log output as your golden master, and compare future runs to this golden master.

I added Samir Talwar's Smoke test framework and told them they could use it if they wanted. Or they could just save the logs to files and compare them with whatever tool they had. Not everyone could use Smoke, because some didn't have Ruby installed...

Even though the "Baby Steps Timer" is not very well suited for "Golden Master" testing, people were very interested in this exercise. Most didn't know "Golden Master" before, and some said they wanted to try it on some "real" code.

Fun and Learning

I had a lot of fun yesterday, and I learned a lot. I hope most or all attendees feel the same :) So thanks to everyone at Softwerkskammer München and LV 1871 who made this event possible.

If you want to run a legacy code refactoring session at your meetup or in your company, feel free to use the Baby Steps Timer. If you want some tips for facilitating the session, or if you have some ideas/improvements for me, just contact me - I'll be happy to help and/or listen. And if you want me to facilitate that session or run a longer training for you, I can do that too: Let's talk ;)

The only problem: My main laptop is running Fedora, and I do not want to switch to Ubuntu just for toying with a new technology. Well, I tried anyway, and I was pleasantly surprised that it was not very hard to set up. (Wouldn't it be really cool if Canonical and Red Hat worked together on React Native and renamed it to "React Native Linux"? Tell them!)

So, here's what I did to install and run the react native starter app on my Fedora system:

Clone the GitHub Repo

First, I clone the react-native-ubuntu repository to a directory where I keep my development libraries ("devel-libs" in this case):

And then you need to install sinopia (a local npm registry), uninstall the react-native command line interface, reinstall it from the ubuntu branch, change the registry for the globally installed npm to sinopia, ...

Wait, what?

Installing Stuff Globally?

So, the developers really want me to modify the global environment on my laptop? I won't do that. Especially not for playing around with a technology that I might not even need.

The thing is: If your library requires me to install anything globally on my development machine, you have a coupling problem. Which is an architectural problem. Go, fix it.

But: I wanted to try react-native-ubuntu anyway. I thought about using a Vagrant VM, but I first wanted to try to install everything locally - within my example project. And it worked! So throw away the react-native-cli/README.md and bear with me...

Install Qt Dependencies

I actually did install some dependencies globally - The Qt 5 libraries that are needed to compile and run the react-native app:

I think those were all. You might need different libraries, depending on your system setup...

Local Installation

Well, let's just try to install react-native locally, in a directory where I want to create my example application ("example"). "path/to/react-native" is the directory where I've checked out the ubuntu branch of react-native before.

Now I can run the react-native-cli from the local installation in "node_modules":

[example]$ node node_modules/react-native-cli/index.js init TestApp
Looks like React Native project already exists in the current
folder. Run this command from a different folder or remove node_modules/react-native

OK, react-native-cli wants to download react-native. That's why the official guide wants me to install sinopia (which I still want to avoid). Maybe there's a workaround? Let's delete the react-native module, let the cli do whatever it wants to do, and then install react-native from the ubuntu branch again...

In my last blog post, I created a basic setup for getting started with Spring Boot and React. But some things were still missing, like type-checking JavaScript with flow or running the mocha tests with gradle.

Once again, you can get the current status of the whole project on GitHub: dtanzer/example-react-spring-boot. Feel free to clone this project as a blueprint for your own projects...

Type-Check with Flow

We need some more dependencies to get started, so we'll add them to web-frontend/build.gradle:

IntelliJ IDEA and FlowType

But this only sets the language level. Right now, IntelliJ IDEA does not actually highlight flow type errors by itself. So to actually see the flow errors, you need to install a third-party plugin: dsilva/webstorm-plugin-flow-typecheck/releases - just download the latest release and install the plugin from disk.

Mocha, FlowType and Gradle

We also want to be able to run the flow type checks and our mocha tests from our Gradle build. There is a Gradle plugin for running tests with karma, but again I did not use it because I could not configure it to work with my project setup. So I just added two tasks that run flow and mocha for me:
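The two tasks might have looked roughly like this in web-frontend/build.gradle (a sketch, assuming flow and mocha are installed locally in node_modules; exact paths and task names depend on your setup):

```groovy
// Run the flow type checks as part of the Gradle build.
task flow(type: Exec) {
    commandLine 'node_modules/.bin/flow', 'check'
}

// Run the mocha tests as part of the Gradle build.
task mocha(type: Exec) {
    commandLine 'node_modules/.bin/mocha', '--recursive', 'src/test/javascript'
}

// Make "gradle check" fail when the type checks or tests fail.
check.dependsOn flow, mocha
```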

Now you can run the checks and tests with Gradle - On your build server, for example.

Fix all FlowType Errors

When you run flow now, you'll see a lot of errors:

src/test/javascript/environment.spec.js:2
2: import { expect } from 'chai';
^^^^^^ chai. Required module not found
src/test/javascript/environment.spec.js:4
4: describe('the environment', () => {
^^^^^^^^ identifier `describe`. Could not resolve name
src/test/javascript/environment.spec.js:5
5: it('should at least run this test, and it should be green', () => {
^^ identifier `it`. Could not resolve name

To fix them, we first have to update web-frontend/.flowconfig so it actually includes all node_modules. But then it would produce some errors because of the "fbjs" module. I don't think they are critical right now, so I set this module to ignore. We also want to ignore everything in the "build" folder, because that's auto-generated code.
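The resulting .flowconfig might look roughly like this (a sketch; the exact regular expressions depend on your directory layout):

```
[ignore]
.*/node_modules/fbjs/.*
.*/build/.*

[include]
node_modules/

[libs]

[options]
```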

I did not change anything in the way development mode works (loading the files with system.js), because this works out of the box (at least with Chrome).

What Next?

Now the basic setup works for me - I can run the tests from the build, and I see type errors in my IDE. But there are still some things missing.

Running the tests from the IDE is probably a very simple task. We could do more interesting things, like making our web application isomorphic - i.e. rendering all the react components on the server when loading the application. Or we could actually start to implement something - like the client/server communication. Maybe in the next blog post ;)

Right now, I am trying to deepen my react knowledge and skills. So I started to set up a little project where I can play around: A spring boot backend and a client application written with react and redux. You can get the whole source code on GitHub: dtanzer/example-react-spring-boot. Here is a really detailed guide that describes what I did to make it work (and often why I did it)...

Note: If you just want to play around with what I did, don't repeat all my steps - just clone my GitHub project. This guide is just here to explain why the code is how it is right now, and how all the parts play together.

Basic Project Structure

Nowadays I tend to split even the simplest project into multiple modules (I am using the IntelliJ IDEA jargon here) to make it easier to manage dependencies later when they grow. So my basic module structure for this project is:

react-spring-boot
+---common
+---spring-boot-webapp
\---web-frontend

spring-boot-webapp is the actual application. It contains the @SpringBootApplication class ("WebApplication" in this case) and the application.properties. It also has a dependency on all other modules to be able to run the whole application. This allows me to keep dependencies between other modules low, and to even apply the dependency inversion principle for modules.

common is basically a dumping ground for code that all (or most) modules need, and where I didn't find a better module name yet. I'll try to keep the code in this module to a minimum.

Later, I will need more modules for REST APIs, backend functionality, and so on. But right now, I just want to get the react app to work, so this is enough (or maybe even over-engineered, as some would say).

But I also want to manage my javascript dependencies with gradle. I have tried several plugins that promise to do this, but none of them worked for me. client-dependencies-gradle, for example, has a bug where it cannot resolve packages that have a circular dependency. So I just call "npm" from the command line. I have also created a task for executing "browserify" (which packs all javascript resources into a single file):
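The tasks could look roughly like this (a sketch; the input and output paths are assumptions):

```groovy
// Install the javascript dependencies by calling npm directly.
task npmInstall(type: Exec) {
    commandLine 'npm', 'install'
}

// Pack all javascript resources into a single file with browserify.
task browserify(type: Exec, dependsOn: npmInstall) {
    commandLine 'node_modules/.bin/browserify',
            'src/main/javascript/main.js',
            '-o', 'src/main/resources/static/js/bundle.js'
}
```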

Now we can start the application in IntelliJ, load it in the browser, and we should see a very simple page that just contains the text "Header...".

Test With Mocha

Now we can add a first unit test for the web frontend. This test does not actually test some of our code yet, it just verifies that we have set up the test system correctly. The tests also need some common setup. I'll also add a shell script to run mocha (the test runner to run our frontend tests):

Development Mode / Reload from Filesystem

So we already have a running application, but every time we change a javascript file, we have to re-run the browserify task and then re-start the application. This takes too much time. We want to be able to reload changed javascript files on the fly, without restarting the application.

Luckily, system.js is a great tool that does everything we need. But we only want to use it in development mode. In production, we still want to use our js/bundle.js that we create with browserify.

We only have to create different configurations for development and production and then create index.html dynamically so it either loads bundle.js or uses system.js.

And application-development.properties contains the overrides for development. It configures spring to load javascript files from the file system and to send a cache-control header with value "0" to the browser. Otherwise the browser would cache the javascript files loaded by system.js, and on-the-fly reloading would not work - not even with a server restart.

Now everything works - you can start the server, load the application, change headerarea.js, reload in the browser, and you'll see the changed text immediately.

What Next?

This basic setup works well for me right now, but there are still things left to do, for example:

Integrate redux

Integrate a REST backend

Run the mocha tests in the gradle build, and break the build when they fail.

Remember: If you just want to play around with what I did, don't repeat all my steps - just clone my GitHub project. This guide is just here to explain why the code is how it is right now, and how all the parts play together.

Do you have any questions? What else could I do? Is there anything I could do better? Please tell me!

We will have vegan food at SoCraTes Day Linz 2016. OK, we will probably have a non-vegan option at some of the meals, but the default is vegan. This was one of the first things I said to our caterer and to a sponsor who might bring breakfast.

People who know me know that I eat - and even love - meat. So why was vegan catering so important to me?

...Inclusive to the Largest Number of Contributors

Our code of conduct says:

A primary goal of SoCraTes Day Linz is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible.

I always wanted to create a conference where everybody can come and where everybody feels welcome. We tried this with Advance IT (which we had to cancel) and we are trying it here. This means we have to be very careful about not giving people a reason not to come.

People come to a conference because of what they can learn, because of what they can share, and because of the connections to other people they can make. I think we have that covered: The other SoCraTes conferences around the world are great - even awesome - in all three respects, and we hope (and give our best) to bring this spirit to Linz.

So, I think that we can attract a lot of developers, but I also think that some things could prevent certain people - especially people from groups underrepresented in technology - from coming. And I want to avoid those things...

Safety

SoCraTes is a great learning experience. We want people to share what they know and learn from others. People will only share and be open to conversations when they feel safe.

This is why we have our code of conduct and why we communicate it regularly through multiple channels.

We were already criticised because we do not define unwanted behaviour clearly enough, but I really want to keep the positive tone in our CoC. But this does not mean that we will tolerate harassment or sexualized content. If anyone thinks that there is room for interpretation in "Be welcoming, friendly, and patient." or that they can test the boundaries, we will happily expel them from the conference.

Food

Almost everyone in our target group would probably be happy with a typical Austrian "Wiener Schnitzel" or "Schweinsbraten" - and would even accept fast food. But there are people who cannot or do not want to eat meat, for several reasons. Others want to eat healthy food or high-quality food. And we do not want to exclude them from the conference.

Luckily for us, there is a great restaurant at our venue, and they will cook really good vegan food for us. We will have a self-service buffet for lunch and dinner at the restaurant. We will also have a vegan breakfast.

Language

Almost everyone in our target group would probably be OK if we had the conference in German. But there are some people in Linz for whom German is not their first language. And maybe some people from other countries want to come. And we do not want to exclude them from the conference.

So the main language (the greeting, facilitation and online communication) of the conference will be English. If you host a session, you can have the session in German if there are only native speakers in the audience. But we kindly ask you to switch to English if any visitor of your session would prefer English.

Location

Almost everyone in our target group would probably be OK with any venue in Linz (because I guess most of our participants will be from Linz). But it would also be great if we had some visitors from farther away. And we want to make it easy for them to come.

So we chose a venue near Linz Hauptbahnhof. The bus from Linz Airport and buses from Graz and Prague stop right next to the venue. There are fast train connections from Vienna, Salzburg, Munich and Nuremberg. Hotels are within walking distance or can easily be reached by tram or bus. If you come by train from Vienna or Salzburg, you might not even need a hotel at all (but then it will be a long day).

Accessibility

Our venue, Wissensturm, has great accessibility features. It was planned with accessibility in mind and with the help of people with disabilities. You can find out more here (in German): Wissensturm Linz - Barrierefreiheit.

What Else Can We Do?

So, what else can we do to make sure we are "inclusive to the largest number of contributors"? Please send me your suggestions.


My name is David Tanzer and I have been working as an independent software consultant since 2006. I help my clients develop software right and develop the right software by providing training, coaching and consulting for teams and individuals.