In this article, we present the views and perspectives of many of the attendees who blogged about QCon, so that you can get a feeling for what the impressions and experiences of QCon San Francisco (November 2009) were. From the first tutorials to the last sessions, people discussed many aspects of QCon in their blogs. You can also see numerous attendee-taken photos of QCon on Flickr.

This QCon was InfoQ's eighth conference and the third annual in San Francisco. The event was produced in partnership with Trifork, the company that produces the JAOO conference in Denmark. There were over 500 registered attendees, with 75% attending from North America and 25% from Europe, Asia, and South America. Over 100 speakers presented at QCon San Francisco, including Douglas Crockford, Martin Fowler, Eric Evans, Erik Meijer and Yukihiro "Matz" Matsumoto. QCon will continue to run in the US around November of every year, and QCon London will be running March 10-12, 2010.

Joshua spoke a lot on the agile process and what it takes to design software that is reliable, releases frequently, and, most importantly, adds value to your “customer” with almost every release. … The way they do this is to decide, after a release, which customer requests they are going to release that week and when they’ll be able to release them. This can mean releasing your software to production more than once in some weeks and not at all in others. This can certainly have positive side effects, especially in your release process.

One point they hammered again and again was that you need to keep your syntax, your semantic model, and your execution separate. For example, if you were to write a new kind of Spring configuration file format in JSON, then JSON would be your syntax, the Spring BeanDefinition interface would be your semantic model, and the Spring GenericApplicationContext might be your executing code. Many DSL implementers are tempted to leap directly from parsing the input to calling code on the fly, but according to the presenters that usually leads to heartache as the DSL becomes more complex.
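The layering the presenters described can be sketched in a few lines of Java. This is a toy, not Spring's real API: the `BeanDef` class and the line-based `name = class` format are invented for illustration; the point is that a JSON or XML front end could build the very same model, and the execution layer never touches the raw text.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of keeping syntax, semantic model, and execution separate.
// BeanDef and the "name = class" format are invented for illustration;
// they are not Spring's actual API.
public class DslLayers {

    // Semantic model: what a bean *is*, independent of how it was written down.
    static final class BeanDef {
        final String name;
        final String className;
        BeanDef(String name, String className) {
            this.name = name;
            this.className = className;
        }
    }

    // Syntax layer: parse a tiny "name = class" format into the model.
    // A JSON front end could produce the same list of BeanDefs.
    static List<BeanDef> parse(String source) {
        List<BeanDef> defs = new ArrayList<>();
        for (String line : source.split("\n")) {
            String[] parts = line.split("=", 2);
            if (parts.length == 2) {
                defs.add(new BeanDef(parts[0].trim(), parts[1].trim()));
            }
        }
        return defs;
    }

    // Execution layer: instantiate from the model, never from the raw text.
    static Map<String, Object> instantiate(List<BeanDef> defs) {
        Map<String, Object> beans = new LinkedHashMap<>();
        for (BeanDef d : defs) {
            try {
                beans.put(d.name,
                        Class.forName(d.className).getDeclaredConstructor().newInstance());
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("cannot build bean " + d.name, e);
            }
        }
        return beans;
    }

    public static void main(String[] args) {
        Map<String, Object> beans = instantiate(parse("greeting = java.lang.StringBuilder"));
        System.out.println(beans.get("greeting").getClass().getName());
    }
}
```

Because the semantic model sits in the middle, a second syntax can be added later without touching the execution layer, which is exactly the flexibility the presenters were arguing for.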

They also went into detail about the difference between an External DSL (something you have to write a parser for, like Ant) and an Internal DSL (basically helper functions on top of an existing language, like Rake).
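To make the internal-DSL side concrete, here is a hedged sketch in Java: a tiny Rake-flavored task API built entirely out of host-language helper methods, so no parser is needed. The `task`/`runOrder` names are invented for illustration, and the sketch assumes an acyclic dependency graph.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Internal-DSL sketch: a Rake-flavored task graph expressed with ordinary
// method calls on the host language. Names are invented; assumes no cycles.
public class BuildDsl {
    private final Map<String, List<String>> deps = new LinkedHashMap<>();

    // The "DSL": each call reads like a declaration, but it is plain Java.
    BuildDsl task(String name, String... dependsOn) {
        deps.put(name, List.of(dependsOn));
        return this;
    }

    // Depth-first walk producing a dependency-respecting run order.
    List<String> runOrder(String target) {
        Set<String> seen = new LinkedHashSet<>();
        visit(target, seen);
        return new ArrayList<>(seen);
    }

    private void visit(String name, Set<String> seen) {
        if (seen.contains(name)) return;  // already scheduled
        for (String dep : deps.getOrDefault(name, List.of())) visit(dep, seen);
        seen.add(name);  // post-order: a task runs only after its dependencies
    }

    public static void main(String[] args) {
        BuildDsl build = new BuildDsl()
                .task("compile")
                .task("test", "compile")
                .task("package", "test");
        System.out.println(build.runOrder("package"));  // [compile, test, package]
    }
}
```

An external DSL like Ant would express the same graph in its own file format and need a parser; the internal version trades some syntactic freedom for getting the host language's tooling for free.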

The best technical tidbit I got out of the session was some insight about Collections classes that have a medium-term life. It’s possible to have a Collections class (say a LinkedList) live just long enough for some of its plumbing to make it into the “old” generation. If that happens, then even if that LinkedList goes out of scope and is eligible for garbage collection, it won’t actually get collected until the next FullGC. Partial GCs only get the “young” generations. That’s not necessarily the end of the world until you consider that many of the objects contained in the LinkedList might have been very short-lived and might be in the young generation heap. But they can’t be garbage collected because the LinkedList still refers to them. These are known as ‘zombie’ objects: objects that aren’t referred to any more, but nevertheless won’t get collected in a Partial GC.
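A generational promotion can't be forced from plain Java, but the pinning relationship behind those zombies is easy to observe with a weak reference: as long as the medium-lived list holds a strong reference, its elements survive collection no matter how short-lived they were meant to be. A minimal sketch:

```java
import java.lang.ref.WeakReference;
import java.util.LinkedList;
import java.util.List;

// Sketch of the pinning behind "zombie" objects: an element that would
// otherwise be collectible stays alive because a longer-lived LinkedList
// still refers to it. (Actual promotion to the old generation depends on
// the collector and cannot be forced from plain Java code.)
public class ZombieSketch {

    static boolean pinnedElementSurvivesGc() {
        List<byte[]> mediumLived = new LinkedList<>();
        byte[] shortLived = new byte[1024];
        mediumLived.add(shortLived);
        WeakReference<byte[]> probe = new WeakReference<>(shortLived);
        shortLived = null;   // drop our own reference to the element
        System.gc();         // request a collection
        // Still strongly reachable through the list, so the collector
        // must keep it alive; this is the zombie-making relationship.
        return probe.get() != null && !mediumLived.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(pinnedElementSurvivesGc());  // true
    }
}
```

Clearing or nulling out such collections as soon as they are done with their contents breaks the pinning and lets the young-generation elements go in a partial GC.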

Keynotes

On Wednesday, two venture capitalists spoke about what they looked for in startups when deciding where to invest. The process they described resembled the Lean Startup ideas of iterating as fast as possible with direct customer involvement. Interestingly, they mentioned Scala and Groovy as developments that they as VCs keep a close eye on.

Rely on one of the disruptive forces (OSS, SaaS, Cloud). Leverage the low marketing cost of a community-driven project to gain fast awareness (mostly through word of mouth).

Start with small components (feature vs. platform) and grow slowly through the value chain. An exception to this model is JBoss, which owes its success to the adoption of J2EE. It is therefore less likely that this model can repeat itself, as there is nothing similar to J2EE on the horizon.

Once you get to the right level of adoption, you need to start building value quickly to be able to monetize the community. The right acceleration model is acquisition of other tools in that area.

Focus first on adoption (at the expense of short-term revenue), and monetize later. It is very likely that when you start to build your community, you won't have a clear answer on how and where the monetization will happen. The answer often comes somewhere down the road, and it will likely involve a long trial-and-error process until you figure out the right combination that will drive revenue out of your community.

The talk by Michael Feathers was good. He talked about what makes a good programmer and what qualities exist in people that make them good programmers. One of the stories he quoted was popularized last year: the result of a study that determined good programmers are people who are comfortable with meaninglessness. What this means is that the people who failed at the programming tasks looked for deeper meaning and weren’t comfortable with a computer that blindly executes whatever you tell it to. He made reference to Betty Edwards’ excellent book, Drawing on the Right Side of the Brain. He also quoted Paul Graham, Malcolm Gladwell, and others. “People who have a sense of taste can write good code.” On the SR-71… “…something beautiful can perform the same way.” Read voraciously. Accept ambiguity.

Dan North’s talk was a freaking riot. The guy should do stand-up. Seriously. His content was solid and he kept the audience’s attention. Some tidbits included advice on introducing “pair programming”: if your team resists pair programming and you think it necessary, just call it “helping” and park yourself there for the day. He also said, “A bunch of alphas aren’t going to pair. But they are more than willing to show off what they know. So, take advantage of that.” Also, many people at QCon were down on Maven. Dan was too, saying, “Maven is like an obsessive-compulsive who comes into your house, brings in his Apache kids, and begins to rearrange everything.” Also, Dan said an essential role in any team is the role of Shaman, someone who knows the history and why this and that happened. Oh, and ESBs are only good if they’re beer. Anytime a speaker mentions beer, they have my attention.

In Lessons Learned from Architecture Reviews, Rebecca Wirfs-Brock discussed not only how to perform an architecture review, but also how to effectively present an architecture depending on your audience and the political environment.

Rebecca Wirfs-Brock presented a thoughtful look at the process of an architectural review. Lots of talk about getting through logic bubbles, knowing your audience, etc. Two interesting things I wrote down. First, when presenting a few choices and doing pros and cons, remember to summarize these in a succinct sentence before making any decisions. This draws the true value out of the different options. Also, when probing for answers from a domain expert, be patient, endure long stints of silence, and keep probing.

Wirfs-Brock opened with two slides showing two different ideas of “collaborative.” In one, all the stakeholders and reviewers of an architecture gather together in harmony and all are shooting for the same goal for the common good. In the other, they only collaborate in the sense that the conquered collaborate with their occupying army. It’s important to know which kind of situation you’re in before picking your toolset to deal with it. I was a little shaken when she showed a slide of my boss’ book and said it was an example of a toolset to use in the occupying army kind of collaboration. What does that say about my day job?!

My best takeaway from the talk is that it’s useful to clearly organize your architectural feedback into buckets:
1) Recommendations — we really think you need to do these and not doing them would be a mistake
2) Suggestions — if you do these I predict they will make you happy, but you won’t miss them if you don’t do them
3) Observations — a place to put statements about perceived problems that aren’t really problems, or point out good choices that should be kept

After the keynote, I went off to see a talk by Joseph Yoder, a consultant who helps people write software. He wrote a paper called Big Ball of Mud and referred to it throughout his talk. His talk was about architectural patterns, you know, stuff like client-server, model-view-controller, etc. His favorite pattern was the Adaptive Object Model, and you can read a lot more about that on the Adaptive Object Model website!

Next up was Nathan Dye, who talked about continuous deployment. He claimed that Microsoft can deploy up to 200 times a day. A friend tweeted me and said that was often enough to catch every “oops” check-in that happens in a day. What Nathan was describing is that when you can deploy so easily that any check-in can be built, tested, packaged, and deployed, your software is bound to be maintainable. He went into many of the techniques that can help companies gain the benefits of continuous deployment.

Day 1 was rounded off by a fantastic talk by Eric Evans. Eric was by far the best presenter, IMHO. I might be saying that only because I’ve shared a beer with Eric on a few occasions since he consults with my company. Like Dan, Eric had the audience laughing, but not at jokes… at scary truths that everyone in the audience was familiar with. Eric’s talk was an introduction to Domain Modeling, the topic of his excellent book on Domain Driven Design.

You don’t know scale like these guys know scale. Many of the presenters were talking about applications at truly mind-blowing scale. Historically, that kind of scale would only apply to secret government operations and serious physics research.

Cheap and horizontal, not expensive and vertical. Every one of these guys that operates at scale does so on commodity or almost-commodity hardware. I didn’t hear anyone mention 64-way Sun servers, except as a joke.

Asynchronous interaction and coupling. Applications have to be designed for asynchronous interaction. That means not only between tiers, but also with the user. To get the kinds of performance and resiliency gains many of these sessions were talking about, you can’t do it any other way. Also, asynchronicity helps take advantage of future innovations like cloud computing.

One of the most impressive tracks at QCon is Architectures you've always wondered about. I'm always inspired by seeing how sites such as eBay or LinkedIn handle their scaling requirements. This year's session about Amazon S3 was equally impressive, describing design patterns for building applications that can guarantee uptime in the face of hard drive, network or machine failures. To test their service, Amazon regularly holds unannounced "game days" where they take an entire datacenter offline to see if their architecture can handle it. That is some serious testing.

Amazon S3 has a goal of 99.9% uptime. I was surprised to hear this since I thought enterprise-class uptime was 5 9’s (99.999%). Anyway, Jason went over the main reasons for failure and exhaustively went through the resolutions for each.

Facebook handles 200GB/day worth of updates coming in, and 12+TB per day if you include derived data. That’s a lot of data, and it has no hope of fitting in a traditional data warehouse like Oracle. Consequently, they use Hadoop for both data storage and data processing, as do many organizations that work at that kind of scale. But once they started doing that, they ran into the problem that it’s very difficult, especially for analysts, to conduct ad-hoc queries over the data.

So, the Facebook team created HIVE as a SQL-like layer over Hadoop to allow for ad-hoc analysis. HIVE is an open-source subproject of Hadoop. They spent most of the talk describing HIVE and some of the clever ways they use Hadoop and map-reduce to execute SQL-like queries in parallel.
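The core translation, turning a SQL-ish aggregate into a map phase and a reduce phase, can be shown in miniature. This hedged Java sketch evaluates the equivalent of `SELECT key, COUNT(*) ... GROUP BY key` on a single machine; real HIVE of course compiles far richer plans onto a Hadoop cluster.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Miniature of how "SELECT key, COUNT(*) GROUP BY key" maps onto map-reduce.
// Single-machine toy; HIVE compiles much richer queries onto a real cluster.
public class GroupByMapReduce {

    static Map<String, Long> groupCount(List<String> rows) {
        // Map phase: each row emits a (key, 1) pair.
        List<Map.Entry<String, Long>> emitted = new ArrayList<>();
        for (String row : rows) {
            emitted.add(new AbstractMap.SimpleEntry<>(row, 1L));
        }
        // Shuffle: group the pairs by key (Hadoop does this between phases).
        Map<String, List<Long>> grouped = new TreeMap<>();
        for (Map.Entry<String, Long> e : emitted) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        // Reduce phase: sum each key's values. Every key reduces independently,
        // which is what lets a cluster run the reducers in parallel.
        Map<String, Long> counts = new TreeMap<>();
        for (Map.Entry<String, List<Long>> e : grouped.entrySet()) {
            long sum = 0;
            for (long v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(groupCount(List.of("us", "uk", "us", "us")));  // {uk=1, us=3}
    }
}
```

The independence of each key's reduction is the whole game: it is what lets the same query shape scale from this toy to thousands of cores.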

To give you an idea of the kind of load they’re putting through the system, they said they have a production Hadoop cluster with 5800 cores and 8.7 PB of data (divide by three for replication). Over this cluster they run ~7500 HIVE jobs per day. Wow. That’s not just massive scale, that’s mind-blowing scale.

LinkedIn is a 90% Java shop with lots of memcached for caching and ActiveMQ for messaging. They said they started the traditional way with big relational databases and n-tier architectures, but quickly ran into the scale wall. To give you an idea of what they’re talking about, they do 35 million updates per week and 20 million service calls per day….

Then they went on to describe some of the infrastructure they use. Interestingly, as updates come in, they are stored in two places:
level 1 storage: temporal, rolling store on Oracle containing CLOB data with varchar keys
level 2 storage: tenured data on Voldemort containing key-value pairs
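As a rough sketch of that two-level split, with plain in-memory structures standing in for Oracle (level 1, rolling and temporal) and Voldemort (level 2, tenured key-value), so purely illustrative:

```java
import java.util.AbstractMap;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the two-level storage described above. In-memory structures
// stand in for Oracle (level 1) and Voldemort (level 2); names and the
// rolling-window size are invented for illustration.
public class TwoLevelStore {
    private final Deque<Map.Entry<String, String>> level1 = new ArrayDeque<>();
    private final Map<String, String> level2 = new HashMap<>();
    private final int level1Limit;

    TwoLevelStore(int level1Limit) { this.level1Limit = level1Limit; }

    void update(String key, String value) {
        level1.addFirst(new AbstractMap.SimpleEntry<>(key, value));  // recent, rolling
        level2.put(key, value);                                      // tenured copy
        while (level1.size() > level1Limit) {
            level1.removeLast();  // old updates roll out of level 1 but stay tenured
        }
    }

    String read(String key) {
        for (Map.Entry<String, String> e : level1) {
            if (e.getKey().equals(key)) return e.getValue();  // hot, recent data
        }
        return level2.get(key);  // fall back to the tenured store
    }

    public static void main(String[] args) {
        TwoLevelStore store = new TwoLevelStore(2);
        store.update("a", "first");
        store.update("b", "second");
        store.update("c", "third");           // "a" has rolled out of level 1
        System.out.println(store.read("a"));  // first (served from level 2)
    }
}
```

The appeal of the split is that the small rolling store absorbs the write-heavy recent traffic while the tenured key-value store keeps the full history without ever being scanned for hot reads.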

Miller works for Terracotta, and so most of what he concentrated on was EHCache and Terracotta. Much of the session had to do with configuring and using each of those tools, but I did get a couple of good reminders about what’s good to cache and what isn’t. Specifically, before caching something, make sure it has good “locality” (i.e. the same piece of data tends to be asked for in clumpy bursts of time) and a good distribution (i.e. the majority of people ask for a small subset of the total data universe).
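Those two criteria can be checked empirically before reaching for a cache. A hedged sketch, using a small LRU cache built on `LinkedHashMap`; the access patterns and sizes are invented for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of testing cache-worthiness: a small LRU cache only pays off when
// accesses are clumpy in time (locality) and concentrated on a small hot
// subset of keys (distribution). Workloads below are invented examples.
public class CacheWorthiness {

    static double hitRate(List<Integer> accesses, int capacity) {
        Map<Integer, Boolean> lru = new LinkedHashMap<Integer, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> eldest) {
                return size() > capacity;  // evict least-recently-used beyond capacity
            }
        };
        int hits = 0;
        for (int key : accesses) {
            if (lru.containsKey(key)) hits++;
            lru.put(key, Boolean.TRUE);  // record the access (updates LRU order)
        }
        return hits / (double) accesses.size();
    }

    public static void main(String[] args) {
        // Hot, skewed workload: three keys asked for over and over.
        List<Integer> hot = new ArrayList<>();
        for (int i = 0; i < 100; i++) hot.addAll(List.of(1, 2, 3));
        // Cold scan: every key distinct, no reuse at all.
        List<Integer> scan = new ArrayList<>();
        for (int i = 0; i < 300; i++) scan.add(i);
        System.out.printf("hot=%.2f scan=%.2f%n", hitRate(hot, 4), hitRate(scan, 4));
    }
}
```

The hot workload hits almost every time while the pure scan never hits at all, which is exactly why a cache in front of scan-shaped access is wasted memory.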

Ola Bini spent most of Friday on the DSL track and commented about it on his blog:

My colleague Brian Guthrie started out with a strong hour about internal DSLs in various languages. Ioke got a few code examples, which was fun. After that Neal and Nate Schutta talked about MPS. I haven’t seen this much detail about MPS before so it was helpful.

Don Box and Amanda Laucher did a talk about the technology formerly known as Oslo. I didn’t think this tech was anything cool at all until I saw this presentation. In retrospect this was probably my favorite presentation of the conference. What came together was how you can use M as a fully typed language with some interesting characteristics, and also the extremely powerful debug features. It’s nice indeed.

… Magnus Christerson from Intentional showcased what they’ve been working on lately. Very impressive stuff as usual.

M is a language unto itself; it’s not the Lex and Yacc model where you have a spec and you spit out C on the other end. You can build a SQL database with M. You can use M to generate ORM layers. You can dynamically validate data in your own programs. The cool thing about M is that it’s dynamic. It’s also a CLR language, so it inter-operates with all of your code (assuming you target the CLR!). It allows you to define DSLs and validate them at runtime, nay, traverse them, violate them, and do whatever you like. All in all, it seems very powerful, and the audience seemed to “get it” toward the end.

The session started off with Amanda telling Josh to effectively shut his trap, as Josh started talking before the mic was ready. I was kind of shocked until I realized that this F# programmer has some bite. The pair then launched into a beginner-grade introduction to the language. This was fine for me since I hadn’t seen any F# code until now. In college I had exposure to Lisp, ML, and Prolog, so I knew what functional languages were about. However, that was a long, long time ago at the University of Buffalo, and I think I pulled a 2.0 average that year because I was out getting drunk all the time. Anyway, F#, like any .NET language, has full access to the CLR and anything it provides, so it’s an important language from that perspective. A few colleagues of mine, Pavan Podila and Kevin Hoffman, wrote an excellent book called WPF Control Development Unleashed, and one of their readers went and translated all of the WPF code samples into F#. Neat.

There was a lot of interest in lean software development and kanban as well. I attended presentations by Jeff Patton and David Laribee which provided useful practical tips and real-world experience. Though I didn’t attend it myself, I heard a lot of good things about Henrik Kniberg’s talk as well.

IMHO, Eric was the best speaker at #qcon. … Eric challenges: modeling is not an up-front investment that pays off later; it helps you get there in the first place. He also challenges the old agile adage: “The simplest thing that could possibly work” is often interpreted as “the quickest thing that could possibly work.” Eric says that it would be better stated as, “what is the most concise, clear, and most easily understood way to do this”… His point was that doing that requires real work.

Another notable quote: “Typical UML is not a good representation of the model as it is a visual representation of the program. A model, in contrast, is a collection of assumptions, rules, and choices that led you to write the program that way.” Well said, Eric.

The very next talk, after the Asgardian, was with Adam Wiggins of Heroku fame… now I know you’re thinking of Andrew Wiggin from Ender’s Game, but this guy was not a six-year-old kid. He had a goatee and possibly earrings. But he was smart… so maybe he could destroy an entire alien civilization from across the galaxy. Adam’s talk was awesome; he took us through lots of nifty technologies you can use to make an app perform well at scale: memcached, couchdb, mvcc approaches to data, hadoop, redis, varnish, rabbitmq, erlang, etc. Great talk, plenty of reading material to take home to Colonel Graff.

Next talk was on Cloud computing by Michael T. Nygard. After reading that name and seeing the 6 ft. 4 in. dude on the podium, I half expected him to cry Valhalla and pull out a two-handed sword and engage a party of ogres. But then he started to talk about cloud computing and I forgot all about the ogres. He shared an interesting piece of info about CERN: a typical experiment generates a gazillion data points, which can’t be stored in any one location because the power needed for the datacenter would dwarf the power needed for the actual experiment. What does CERN do? They put in big fat data pipes and get the data the hell out of Dodge before it reaches critical mass, spawns strangelets, and consumes the Earth. It’s true. The data then fans out to successive tiers in a complex network of supercomputers where it is digested so we can find out how to harness the power of the sun or blow up the moon. The whole cloud computing talk dovetailed with Nathan Dye’s talk from a day earlier, and computing in the cloud was a hot topic at QCon.

Backers of JRuby have lately claimed serious traction in the enterprise for this Ruby implementation for the Java Virtual Machine (JVM). This week, JRuby project co-lead Charles Nutter pointed to packed sessions at the annual QCon developer conference in San Francisco as evidence supporting that claim.

Vendors

I should have known from the title that this talk was high-risk. Alex Zitzewitz’s talk was about cyclic dependencies, how they are bad, and how a six-sigma approach can solve all of your problems. Alex was a decent presenter and not too pretentious, but I kind of shut off once the presentation went into a product demo. Still, if you’re a Java shop, SonarJ might be an interesting tool. QCon invites vendors to pay the bills, but vendor presence is definitely subdued compared to other cons I’ve been to (VSLive, VMWorld, GDC, etc.)

That evening, QCon hosted a party at Jillian’s. We got two drink tickets and a whole crapload of bar food. Me and my buddies played pool and met up with Eric Evans, who introduced us to other clients of his. I didn’t know people still programmed mainframes! But they do, since these guys told me they did and I believed them. All joking aside, this is exactly the kind of environment where the domain-driven design philosophy can work magic, helping companies avoid wasting millions of dollars trying to displace a legacy system that works perfectly well.

The entire experience was a rip-roaring success. First we got to meet some of the leaders in the Kanban community, and even got to spend some quality time talking to them in between sessions.

We gave lots and lots and lots of demos, and the touch-screen was very popular. One guy brought a colleague of his back and basically gave the demo himself, and another attendee told us he stepped out of a session he was in to come see us…because the crowd was too big in between sessions for him to get a good look.

And along the way, we even got to meet some of our “heroes”, like Eric Evans, Martin Fowler, and Douglas Crockford.

The wonderful folks at QCon even let us keep the banner you see in the background. Thanks to Floyd, Roxanne, Geeta and all the other folks at InfoQ and Trifork that helped make the experience such a great one for us. We’ll definitely be back.

If you’ve never heard of QCon before, the conference bills itself as a conference for and by software developers and architects of any “denomination”. Walking the hallways you are equally likely to encounter a Rubyist, Javaan or .NET aficionado. This refreshing diversity leads to interesting presentations and coffee-corner conversations.

Takeaways

The QCon conference in San Francisco has always been one of my favorite conferences. Floyd is doing a great job of bringing an interesting blend of people from across the spectrum of the industry (Java, .Net, Ruby) into one place. He also brought in some interesting speakers that you don’t normally see at this type of developer conference, such as the VCs, whose talk I found particularly interesting. This conference is a great environment to open your mind to other ideas and thoughts outside of your day-to-day realm. It took me a few days to let all the experiences from the various discussions in the conference sink in.

There was a lot of interest in teaching people the craft of agile software development; presentations discussing personal traits required for programming and software apprenticeship were the result.

There was also strong interest in JVM-based languages like Scala and Clojure. This is hardly news, but what surprised me was the level of interest from non-Java developers. Several people I talked to told me that they were experimenting with these languages in their spare time while working in .NET at their day jobs.

I was fortunate to speak at QCon San Francisco, CA on November 20, discussing service security and my own journey in understanding security, but more importantly how services can be hacked. It was interesting, when examining the audience, to see a mixture of participants, but the lack of questions was a little disconcerting. I could take three things from that:
1) Everyone in the audience was familiar with service hacking / security.
2) People were not very familiar with the topic and were afraid to ask questions, or didn't understand the content.
3) People were not interested.
Since the audience stayed for the entire presentation and the questions were basic, I think the majority of the audience was in category 2.

Conclusion

QCon San Francisco was a great success and we are very proud to have been able to offer such a conference. It will continue to be an annual event in both London and San Francisco, with the next QCon being around the same time next year in each location. We also look forward to continuing to run QCon in other regions which InfoQ serves, such as China and Japan. Thanks everyone for coming and we'll see you next year!
