Chasing Tools

One of the very first projects I ever worked on as a professional was a relatively large site with tons of legacy code. Legacy code brings many headaches. My favorite example was opening a few pages to find that these pages used not one, not two, but three different JavaScript frameworks!

The developers were overworked and the site had never gotten enough budget to give it the rebuild it needed. Granted, they could have stuck with the framework they originally included, but as each framework faded and gave way to the next, the online ecosystem and community around it dried up and shriveled.

There’s a happy ending to this story. Eventually, jQuery was used and all the other frameworks were removed (talk about a big performance win!). jQuery never suffered from the same fate as the other frameworks the team had tried to use—its ecosystem only continued to grow and flourish as time went on.

Of course, the snarky side of me would be happy to point out that had they used good old fashioned JavaScript, the problem would have never manifested in the first place.

That isn’t entirely fair though, is it? There’s a reason people build these tools. Tools exist because somewhere someone thought one would be helpful in some way. So they created it and they shared it. And frankly, that’s pretty darn awesome.

To be clear, I don’t think that was the point of the article. The thing is, it’s not the ecosystem that’s the problem. It’s great that we have a plethora of options available to us. It beats the alternative. No, the problem is the way we’ve chased after each new tool that comes along and, even more concerning to me, the way we teach.

Our industry loves tools, and not without reason. Tools help us to be more productive. They can help to automate low-hanging fruit that is critical to the success of a project. They can help to obfuscate tricky browser compatibility issues. They can free us up to focus on the bigger, more interesting challenges presented. Tools are generally a “Good Thing”.

Unfortunately our love of tools has led to an unhealthy mentality where we constantly feel the need to seek out the next great tool to be released.

Build scripts are a fun example. Grunt came out and was really instrumental in getting the front-end community as a whole to give more serious consideration to having a formal build process. Just as people started to adopt it more frequently, the early adopters were already starting to promote Gulp as a superior option. As some developers tried to make the shift, still others jumped on Broccoli. For many, it seemed that just as they started to understand how to use what had been the new best option, an even newer best option would become available.

Mostly, I think the evolution is healthy. We should be iterating and improving on what we know. And each build tool does things a little differently and different people will find one or the other fits their workflow a bit better. The problem is if we blindly race after the next great thing without stopping to consider the underlying problem that actually needs solving.

I don’t know exactly what fosters this mentality, but certainly the way we approach teaching JavaScript (and web technology as a whole) doesn’t help.

If you’ve ever tried to find resources about how to use vanilla JavaScript to solve a given issue, you’ll know what I’m talking about. It’s rare to find a post or talk that doesn’t throw a tool at the problem. A common critique you could hear early in the days of jQuery really taking off was that too many posts assumed the use of jQuery. You’ve likely heard similar critiques of using Sass to demo something where you could’ve demoed it using regular old CSS. When the fictional character in the previously mentioned post responds to a simple question with “you should learn React”, it may be a little contrived but it isn’t uncommon.

Just as each additional tool adds complexity to our development environment, each additional tool we mention when teaching someone about how to build for the web introduces complexity to the learning environment. That, I think, was the point of the post going around. Not that the ecosystem is flawed, not that the diversity of options is a bad thing, but that when someone wants to find an answer to a problem, the response they get frequently starts with “use this tool, then set this up”.

It’s ok—good even—to teach new tools that may be helpful. But when we do so, we need to be careful to present why these tools may be helpful as well as when they may not be. We need to be careful to separate the underlying issue from the tool itself, so that the two do not become conflated. Let people learn what’s going on under the hood first. Then they can make a determination themselves as to the necessity of the tool.

I’ve said it before, but the most valuable skill you can develop is not learning Node.js. Or React. Or Angular. Or Gulp. Or Babel. The most valuable thing you can do is take the time to learn the core technologies of the web: the network stack, HTML, CSS and JavaScript. That core understanding serves as your foundation when making decisions about tooling.

Those tools are useful in the right context, but you need to be able to understand what that context is. Whenever you come across an issue that needs solving, think about what the underlying problem actually is. Only once you’ve identified that should you consider whether you might want to use a tool to help you address the problem, and which tool that might be.

For the tool itself, there’s a few things you might want to consider. Here’s what I tend to look at:

Who benefits from the use of this tool and how?
Someone has to benefit, or else this tool doesn’t really need to be here, does it? If you can’t articulate who is benefitting and how they’re benefitting, then it’s probably not a tool that needs to be used on this particular project.

Who suffers and how?
There is always a trade-off. Always. Someone is paying for the use of this tool in some way. You could be adding complexity to the development environment or, in the worst case scenario, it could be your users who are paying the price. You need to know the cost so that you can compare it to the benefits and see if it’s worthwhile.

How does it fail?
I’m stealing this from the fine folks at Clearleft, but I love the way this frames the discussion. What happens when something goes wrong? Like it or not, the web is a hostile environment. At some point, for someone, something will break.

Does the abstraction feed the core?
If it’s a framework or library, does it help to strengthen the underlying core technologies in a meaningful way? jQuery, to me, is a good example of this. It was a much friendlier way to interact with the DOM, and some of the work they did ended up influencing what you can do with JavaScript, and how that should work.

There may be more questions you want to ask (how active the community is, the number of contributors, etc), but I find this is a really good start to help me begin to think critically about whether or not it is worthwhile to introduce another tool into my current environment.

Very often, the answer is no. Which means that when you’re chatting with some developer friends and they’re talking about using this brand new framework inside of a new code editor released last week, you may have to politely nod your head and admit you haven’t really dug into either yet. That’s nothing to be ashamed of. There is power in boring technology. Boring is good.

Have you ever watched someone who has been using Vim for years work in it? It’s amazing! Some joke that the reason they’re still in there is because they haven’t learned how to quit yet, but I think they’re onto something. While some of us jump from tool to new tool year after year, they continue to master this “boring” tool that just works—getting more and more efficient as time goes on.

We are lucky working on the web. Not only can anyone contribute something they think is helpful, but many do. We benefit constantly from the work and knowledge that others share. While that’s something to be encouraged, that doesn’t mean we need to constantly be playing keep-up. Addy’s advice on this is absolutely spot-on:

…get the basics right. Slowly become familiar with tools that add value and then use what helps you get and stay effective.

Start with the core and layer with care. A rock-solid approach for building for the web, as well as for learning.

Joining Snyk

I remember sitting around with a few friends at Chrome Dev Summit last year. The conversation eventually turned to security. We all agreed about how massively important it was, but we also each acknowledged that it’s not trivial to do correctly. It’s not the most accessible topic and the tooling and standards can be a bit unwieldy.

But moving to HTTPS, while important, is just one tiny step in what it really takes to make sure that the people using our sites and applications are safe. If the web is really going to be secure by default, then we need many more tools and standards along a similar vein. We need security to be demystified.

Maybe that’s why when Guy was showing me the first incarnation of Snyk I was so impressed. He and his team had created a tool that focused on one part of the security equation—how to make sure you’re not unknowingly introducing vulnerabilities while using open-source code (focusing on Node initially)—and made addressing that part pretty trivial. Each feature they built on top only made me more and more impressed.

I found myself talking about Snyk casually to friends, each time seeing them respond with the same sort of enthusiasm I had the first time I used it. I’m not one to get super excited about tooling very often, but I do appreciate a well-built tool that makes important things easier.

After many conversations, coffees and other drinks, I decided to take the leap and join Snyk. I’m going to be starting and leading developer relations there. I’ll be rolling up my sleeves and getting my hands dirty with code quite a bit (I’ve got a long list of things I want to build)—something I’m looking forward to.

Several friends who I told about my move all asked the same question: “Does this mean you’re not going to focus on performance anymore?” The answer is: of course it doesn’t. You’re not getting rid of me that easily.

I’ve always considered myself a “web” person, not a “performance” person. I talk about performance so much because it interests me and I think it’s critical to the success of the web. I still do and am unlikely to stop thinking so anytime soon.

The team at Snyk is doing important work—work that I want to help with. I’ve talked to them about what they have in mind for the future, and it’s pretty exciting. That, plus the fact that Anna promised to bake me cakes (I have a massive sweet-tooth), made this an opportunity I couldn’t pass up.

Last Friday was my last day at Akamai. Before I joined, I already had a tremendous amount of respect for Akamai. Leaving, I have even more. In a segment of our industry that I worry can be a little shortsighted at times, they continue to think bigger—investing in the web as a whole through standards and browser involvement. In addition, they are smart. I mean, really, really smart. There’s a reason they’ve been around as long as they have.

The only place to go after a company where you are surrounded by brilliant and passionate people is another company filled with brilliant and passionate people. Snyk’s team is absolutely top-notch and I’m looking forward to working with them to make it easier for the web to be secure by default.

The Taxi Ride

I head out of the airport in San Francisco and grab a taxi. I consider myself an outgoing and social person, but I’ve just spent six hours or so crammed next to a bunch of strangers in a combination of airports and planes. All I want to do right now is hang in the back seat of this taxi, enjoying 45 minutes of quiet.

You never know with taxis though. Sometimes, the driver will ask where I’m headed and then stay quiet the rest of the way—the two of us physically in the same car but mentally somewhere else entirely. Other times the driver will want to make small talk. We’ll talk about where we’re each from, what the weather is like back home, how many kids we have and how long I’ll be in town for.

Today, it turns out, is not going to be a quiet ride.

The driver—a middle-aged man—and I take turns talking about where we live, the weather, all the standard fare.

He asks if we play football where I’m from. Soccer. He corrects himself remembering I’m an American and we made up our sort of football just to be difficult. I tell him that yeah, actually, soccer is pretty popular at home. He asks if any area teams need a coach. I tell him I’m not sure.

He goes on to tell me how much he loves soccer. How he’s always loved playing it, coaching it. He tells me about how, the other day, he was practicing and offered to teach a few tricks to a twenty-something year old who was nearby. She said sure. She was good, but he started to show her a few things she hadn’t known. He challenged her to a race and won. He raced another twenty-something (her boyfriend if I remember correctly)—he a bit more confident in his abilities but in actuality not as good as the young woman. He beat him as well.

He tells me how there is a level of art to the game that most casual fans don’t appreciate. How if you go back and watch the greats, you see a sophisticated grace. He compares it to Steph Curry this year or Michael Jordan (who he believes is still the epitome of basketball perfection) and how they transcend the sport they play in—how they see things others don’t and move in ways others don’t.

He tells me he wants to coach soccer professionally someday. I smile and say that’s great, but internally I know that’s a long shot. I always wanted to coach basketball professionally. Everyone has a pie-in-the-sky dream like this at one point or another in their life, but that doesn’t mean they come true as planned.

Maybe sensing what I’m internalizing, he insists. He tells me he knows he will. He firmly believes that, in the United States, he can do anything. If he puts enough time and energy into it, and if he stays patient and focused, there is nothing he can’t accomplish here. It’s the same old American dream that we’ve heard many times—though I have to admit I haven’t heard it as often lately.

He elaborates. It turns out he believes this because he’s done it already.

The taxi driver’s name is Ahmedin Nasser. He moved to the United States from Ethiopia in 1985. At the time there were no freely available public libraries in Ethiopia. None. After graduating college, Ahmedin decided to change that. He and 12 friends organized Yeewket Admas with the goal of bringing free public libraries to Ethiopia.

He rounded up $15,000 in donations and sent a 40-foot container of books, 11 computers and 4 printers back home. His organization is responsible for at least eight different libraries in Ethiopia now.

He tells me he felt he needed to give back—that we all have a responsibility to do that. He’s a firm believer in a quote he once heard attributed to Albert Einstein: “The value of a man should be seen in what he gives and not in what he’s able to receive.”

He hands me a laminated newspaper clipping from an article that was written about him. He’s proud of this, and rightfully so.

Proud, but not content.

He’s currently working on the next step of his vision—ensuring that there are more libraries setup in Ethiopia and that these libraries will be properly maintained and sustainable.

I ask why libraries…why books. He tells me it’s because books can transform people. He tells me that we take it for granted in the United States that we have free access to a wealth of knowledge. He goes on to talk about how much he loves books and that he believes that one of the most important things you can do for a young child is introduce them to the love of reading.

I mention The Reading Promise to him, and he asks me to write the title and author down so he can grab a copy. He starts to tell me about a young boy in Africa who couldn’t afford to go to school, yet through a book learned how to build a windmill to bring electricity to his village. I mention that he has written a book (The Boy Who Harnessed the Wind), so I write that down for him too.

He thanks me and tells me that books are something he will never hesitate to indulge in. He says he’ll happily go a day without eating if it means he can buy a couple great books. He is a Muslim. Fasting is part of his religion so going a day or two without food is not a difficult thing to do—in fact he finds it revitalizing.

By the time we pull up to my hotel—45 minutes after stepping into this taxi feeling exhausted and worn down—I’m revitalized as well. I thank him for the amazing conversation and ask him if he minds if I type up some of it. He’s more than happy to let me. He says he wants everyone to know that someone ordinary can do extraordinary things.

In 2014 there was a research paper that concluded that people who interact with strangers when they’re traveling (whether by train, plane or taxi) are happier than those that do not. I’ve always had my doubts.

But at least on that day, in San Francisco, talking to Ahmedin Nasser—it was true.

A Standardized Alternative to AMP

It’s no secret that I have reservations about Google’s AMP project in its current form. I do want to make it clear, though, that what bothers me has never been the technical side of things—AMP as a performance framework. The community working on AMP is doing good work to make a performant baseline. As with any framework, there are decisions I agree with and some I don’t, but that doesn’t mean the work isn’t solid—it just means we have different ways of approaching building for the web.

But that’s the beauty of the web, isn’t it? It’s not just that anyone, anywhere can consume the information on the web—that’s fantastic and amazing, but it’s not the complete picture. What makes the web all the more incredible is that anyone, anywhere can contribute to it.

You don’t need to go through some developer enrollment process. You don’t need to use a specific application to build and bundle your apps. At its simplest, you need a text editor and a place to host your site.

That’s it. The rest is up to you. You can choose to use jQuery, SASS, React, Angular or just plain old HTML, CSS and JavaScript. You can choose to use a build process, picking from one of numerous available options based on what works best for you. Certainly everybody has their own opinion on what works best but in the end it’s your choice. The tools are up to you.

That’s not the case with AMP as it stands today. While I’ve heard many people claim that the early concerns about tying better methods of distribution to AMP were unfounded, that’s the very carrot (or stick, depending on your point of view) that they’re dangling in front of publishers. There have been numerous rumblings of AMP content being given priority over non-AMP content in their search engine rankings. Even if this ends up not being the case right away, they have certainly emphasized the need for valid AMP documents in order to get into Google’s “search carousel”—something any publisher clearly would like to benefit from.

This differs from similar announcements in the past from Google about what they prioritize in their search ranking algorithms. We know they like sites that are fast, for example, but they’ve never come out and said “You must use this specific tool to accomplish this goal”. Up until now.

By specifying the exact tool to be used when building a page, Google makes their job much easier. There has been no simple way to verify that a certain level of performance is achieved by a site. AMP provides that. Because AMP only allows a specific set of elements and features to be used, Google can be assured that if your page is a valid AMP document, certain optimizations have been applied and certain troublesome patterns have been avoided. It is this verification of performance that gives Google the ability to say they’ll promote AMP content because of a better experience for users.

So when we look at what AMP offers that you cannot offer yourself already, it’s not a high-performing site—we’re fully capable of doing that already. It’s this verification.

Content Performance Policies

I’d like to see a standardized way to provide similar verification. Something that would avoid forcing developers into the use of a specific tool and the taste of “walled-garden” that comes with it.

There were several discussions with various folks around this topic, and the option I’m most excited about is the idea of a policy defined by the developer and enforced by the browser. We played around with name ideas and Content-Performance-Policy (CPP) seemed like the best option.

The idea is that you would define a policy using dedicated directives (say, no hijacking of scroll events) in either a header or meta tag. The browser could then view this as a “promise” that the site adheres to the specified policy (in this case, that it doesn’t hijack any scroll events).
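
As a purely hypothetical sketch—no directive names have ever been standardized, so these are invented for illustration—such a policy might be delivered as a response header along these lines:

Content-Performance-Policy: no-scroll-hijack; no-sync-xhr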

If the site then tried to break its promise, the browser would make sure that it cannot (e.g. ignore attempts to cancel the scroll event). An embedder, such as a search engine or a social network app, can then be certain that the “promises” provided by the developers are enforced, and the user experience on the site is guaranteed not to suffer from these anti-patterns.

CPP directives could also be used to control what third parties can do on a given site, as well as a way for third parties to provide guarantees that they will “behave”. This way, content owners can be sure that the user experience will not contain obvious anti-patterns even if the page is pulling in scripts and content from a large number of arbitrary third-party sources.

CPP could borrow from the concept and approach of the already existing Content Security Policies (CSP). This means that there would likely be a reporting-only mode that would allow sites to see the impact the policy would have on their pages before applying it live.

CPPs would free developers up to use their own tools again and avoid limiting them to the specific subset of web technologies that AMP imposes. Because it uses a set of definable policies versus a specific framework, there is much more flexibility in how browsers and apps choose to enforce and promote content. For example, an app may choose to look for a certain set of policies that would work best for its context, while Google may look for an entirely different set when deciding how a page should be prioritized in their search engine. It’s far more extensible.

You could also imagine smarter content blockers that let through ads and other third party content guaranteed to be fast and not interfere with the user experience, while blocking third party content without these guarantees. That would allow us to avoid the centralized model of things like the Acceptable-Ads program, while providing a standard way to have the same benefits.

So…what happens to AMP?

There are too many smart people building AMP to let all that good work go to waste. If we decouple the distribution benefits away from the tool, then suddenly AMP becomes a framework for performance—something it is far better suited to. Developers could choose to use AMP, or a subset of its features, to help them accomplish their performance goals. The difference is that they wouldn’t be forced to use it. It becomes an option.

I’m working with Yoav Weiss to create a formal proposal for CPPs that can be shared and built up. There’s an extremely early draft up already, if you would like to take a look. We’ve discussed the idea with numerous people from browsers and publishers and so far the feedback has been positive. People like the more standardized approach, and publishers in particular like that it feels more open and less like something they’re being forced into.

The idea of CPP is still young and nearly all discussion has happened behind closed doors. So this is us putting it out publicly to get people thinking about it: what works, what doesn’t, what could make it better.

I like the work AMP has done from a technical perspective, and I love the ambitious goal of fixing performance on the web. Let’s find a way of accomplishing these goals that doesn’t lose some of the openness that makes the web so great in the process.

HSTS and Let’s Encrypt

I recently gave the Let’s Encrypt client a try and wrote up how that went. One of the follow-up questions that popped up was about HTTP Strict Transport Security (HSTS) and whether Let’s Encrypt helps with it. Since the question came up several times, I thought it would be worth writing up.

What is HSTS?

While the SSL certificate is a big boost for security in its own right, there is still a potential hole if you are redirecting HTTP content to HTTPS content.

Let’s say someone tries to request wpostats.com (diagrammed below). They may type it into the URL bar without the protocol (defaulting the request to HTTP) or they have it bookmarked from before it used HTTPS. In this case, the browser first makes the request to the server using a non-secure link (step 1). Then the server responds by redirecting the browser to the HTTPS version instead (step 2). The browser then repeats the request, this time using a secure URL (step 3). Finally, the server responds with the secure version of the site (step 4).

Trying to load a secure asset using a non-secure URL exposes a gap in security.
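
You can see the first two steps of that exchange with a quick curl request (a rough sketch—the exact status code and headers will vary):

curl -I http://wpostats.com/

HTTP/1.1 301 Moved Permanently
Location: https://wpostats.com/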

During the initial exchange, the user is communicating with the non-encrypted version of the site. This little gap could potentially be used to conduct a man-in-the-middle attack and send the user to a malicious site instead of the intended HTTPS version. This gap can occur every single time that person tries to access an asset using a non-secure URL.

HTTP Strict Transport Security (HSTS) helps to fix this problem by telling the browser that it should never request content from your site using HTTP. To enable HSTS, you set a Strict-Transport-Security header whenever your site is accessed over HTTPS. Here’s the line I added to my virtualhost configuration for wpostats.com:
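
It was something along these lines, using Apache’s mod_headers (the exact values shown here are reconstructed from the three options discussed below—a two-year max-age plus includeSubDomains and preload):

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"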

The header will only be applied if sent over HTTPS. If it’s sent over HTTP it’s unreliable (an attacker could be injecting/removing it) and so the browser will choose to ignore it. As a result, that initial redirect still has to take place. The difference is that now, after the browser requests the content using a secure URL, the server can attach the HSTS header telling the browser to never bother asking for something over HTTP again, sealing off that vulnerability for any future access from that user. As an added bonus, with the redirect out of the way we get a little performance boost as well.

The header has three options (each of which was used in my example above):

max-age

The max-age parameter is mandatory and specifies how long the browser should remember that this site is only supposed to be accessed over HTTPS.

The longer the better here. Let’s say you set a short max-age of one hour. The user accesses your secure site in the morning and the browser now seals up the vulnerability. If they then go to your site using a non-secure URL in the afternoon, the Strict-Transport-Security header has already expired, meaning the vulnerability is wide open again.

Twitter sets their max-age to a whopping 20 years. I chose two years for mine, which most likely says something about me having commitment issues or something.

includeSubDomains

The includeSubDomains parameter is optional. When included, it tells the browser that the HSTS rules should apply to all subdomains as well.

preload

Some of you may have noticed a kink in the HSTS armor. If a user has a fresh local state for any reason (max-age expires, haven’t visited the site before) then that first load is still vulnerable to an attack until the server has passed back the HSTS header.

To counter this, each major modern browser has a preloaded HSTS list of domains that are hardcoded in the browser as being HTTPS only. This allows the browser to know to request only HTTPS assets from your URL without having to wait for your web server to tell them. This seals up that last little kink in the armor, but it does carry some significant risk.

If the browser has your domain hardcoded in the HSTS list and you need it removed, it may take months for the deletion to make its way out to users in a browser update. It’s not a simple process.

For this reason, getting your domain included in the preload list requires that you manually submit the domain and that your HSTS header includes both the includeSubDomains parameter as well as this final preload parameter.

Does the Let’s Encrypt client enable HSTS?

The Let’s Encrypt client can enable HSTS if you include the (currently undocumented) hsts flag.

./letsencrypt-auto --hsts

The reason why it’s not enabled by default is that if things go wrong HSTS can cause some major headaches.

Let’s say you have HSTS enabled. At some point something (pick a scary thing…any scary thing will do) goes wrong with your SSL configuration and your server is unable to serve a secure request. Your server cannot fulfill the secure request, but the browser (because of the HSTS header) cannot request anything that is insecure. You’re at an impasse and your visitor cannot see the content or asset in question. This remains the case until either your SSL configuration is restored or the HSTS header expires. Now imagine you’re running a large site with multiple teams and lots of moving parts and you see just how scary this issue could be.

Because of this risk, HSTS has to be an option that a user must specify in Let’s Encrypt—despite its importance.

Room for improvement

That’s not to say the process couldn’t be improved. The GUI version of the client currently asks you a variety of questions as you set up your certificate. One of those questions asks if you would like to redirect all HTTP traffic to HTTPS.

An example of the Let’s Encrypt GUI asking the developer to decide whether to make everything HTTPS or keep HTTP around.

If the developer decides to redirect all HTTP traffic to HTTPS, I would love to see the very next question be: “Would you like Let’s Encrypt to set up HSTS?”, probably with a warning encouraging the developer to make sure they have all content on HTTPS.

Defaults matter and most people will stick with them. HSTS is important and HTTPS is…well…incomplete without it. If we’re serious about HTTPS Everywhere then we need to be just as serious about enabling HSTS as we are about making sure everyone is serving content over HTTPS. Finding a way to encourage its use whenever possible would go a long way towards boosting security on the web, as well as adhering to one of the primary principles of the Let’s Encrypt initiative (emphasis mine):

Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.

What I Read in 2015

It’s that time of year again. The time when readers everywhere post their “best books of whatever-year” and “my favorite books of the year” lists making my pile of books to read grow rapidly. As usual, I’m happy to return the favor.

As I did in the past, I’ve included a rating and short review of each book I’ve read to give both you and me some idea of why I enjoyed each book. You’ll notice that no book has a rating below three stars out of five—that’s because if I am not enjoying a book on some level, I discard it. Life is too short to spend reading books that aren’t interesting.

If you forced me to choose, I’d have to say my three favorite fiction books were: All the Light We Cannot See, Crime and Punishment and Leviathan Wakes. My three favorite non-fiction books were: They Poured Fire on Us From the Sky, How Music Got Free and So You’ve Been Publicly Shamed.

I found myself nodding my head in agreement quite frequently while reading this meditation on the way technology is slowly but surely filling in anything that vaguely resembles a void in our “busyness”. There was one point early on in the book where I worried the author was about to get a little too over-the-top in his critique and concerns, but as it turns out, he comes to a pragmatic conclusion at the end, arguing that while every technology can alienate us from some part of life, it is our job “to notice”.

I was torn on how to rate this book. There are sections that are very good and laugh-out-loud funny (particularly much of the quick back-and-forth banter between Jeeves and Bertie) but most of the time the book seemed to drag along. Judging by the overwhelmingly positive reviews of this book (and Wodehouse in general), I’m willing to wager that perhaps I just wasn’t in the right mood for the story.

All the Light We Cannot See is a wonderfully written novel following two primary characters—one a young French blind girl named Marie-Laure and the other a young German boy named Werner—as they struggle through World War II. In particular, the accounts of Werner’s time in the Hitler Youth were heart-breaking & moving. But don’t let the heart-break turn you away—there is a lot of genuine beauty in this story as well. A wonderful, wonderful book.

This is a really interesting take on the topic of technology and its influence on the widespread feeling of not having enough time in the day. While most books on the topic place the blame directly on the technology itself, Wajcman digs much deeper. Early on, she points out that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires.” In other words, to really understand how technology is impacting this feeling of busyness, we need to look beyond the technology itself and see what other factors are contributing.

While the writing is certainly quite dry (sort of par for the course for a lot of University-based publishers), the ideas are fresh, nuanced and well thought out. I do think that referring to studies conducted prior to smartphones to establish how people use mobile phones was a little short-sighted. However, in the end I support the conclusion: “busyness is not a function of gadgetry but of the priorities and parameters we ourselves set.”

While labeled as a “memoir”, Mankoff’s book is actually quite light on the “memoir” side of things. Which is ok by me. Where the book really flourishes is in the discussion of the cartoon process: how the cartoons are created, how they are chosen, etc. In fact, I would have loved to get a little more detail and depth on that side of things. As it is, How About Never is a quick and humorous look at cartoon creation. Mankoff’s writing is very informal and ripe with the kind of humor you would expect from a cartoon editor. Combined with the plethora of cartoons included, it makes for a very fun book to read.

Unbelievable. What these three boys went through, the courage they showed—it left me speechless on so many occasions. They battled hunger, thirst, lions, hyenas, war, prejudice—and they had to do it day in and day out, year after year. This is a hard book to read, but an important one.

It’s easy to see why so many people like this book. It’s well-written and the ideas it is trying to get across are important. I mean, when reading this you can’t help but be inspired by Atticus and the principles that guide his decisions.

The only criticism I really have is that I feel like the story would have benefited from a bit more…friction, I guess. The characters are all pretty shallow: the good people are really good, the bad are really bad. Perhaps that has to do some with the age of the narrating character. But it feels like, given the important topics discussed, there should have been a bit more depth somehow.

That sounds harsh considering the 4 stars, but that’s only because I’m comparing to the high-praise the book is given and the lofty expectations that go with it. The truth is I did enjoy the book quite a bit—I was just hoping for a little more.

I loved getting to spend a bit more time with some of the other characters in this one—Martin is in more of a complementary role. The story got a little bogged down for a couple chapters in the middle and that’s the only thing stopping me from calling this my favorite in the series. So much fun to read!

I haven’t read a lot of Stephen King books. In fact, prior to this one, I’ve only read the Dark Tower series. So I can’t speak to the quality of his books that lie more firmly in the “horror” category. But from what I’ve seen, while King gets his fair share of criticism, the truth is he’s a gifted writer with the ability to paint a story so vividly that you get lost in it. 11/22/63 is a great example. There’s some unnecessary fat in the middle (the book probably could have been about 100-150 pages shorter), but he builds the suspense and intrigue masterfully throughout. It’s an interesting take on time travel and a true page-turner with an ending that you can’t help but be moved by.

There is so much to like about this book. I loved all the behind the scenes stories Bailey tells. The insight into the process of writing for the Muppets, Sesame Street and comedy in general are pure gold.

The only reason this doesn’t get 5 stars is because of the poor quality of the book itself. The text changes sizes for no reason at all throughout the book. I started reading the paperback but ended up buying the Kindle version to avoid the bizarre text issues. Whoever did the editing also did not do a good job—and I say this as someone who usually doesn’t notice those sorts of things. I hope someone republishes this at some point with a little more attention given to it because the actual substance here is wonderful.

How We Got to Now builds off the ideas from Johnson’s Where Good Ideas Come From and further hammers home the point that good ideas do not come from lightbulb moments but from what Johnson calls the Hummingbird Effect. For each chapter, Johnson focuses on a different innovation (glass, cold, sound, etc) and shows you how it is connected to innovations you hadn’t considered (from printing press to selfies, for example). If you’re familiar with Johnson’s previous writing you won’t be too surprised by the conclusions here—but you’ll enjoy the winding paths you take to get there.

Mixed Nuts is itself a bit of a mixed bag. It felt that at times he was overly abstract in those discussions. I also felt like he stretched his own definition of “comedy team” a bit too thin in order to include more contemporary pairings—Cheech & Chong, Belushi & Aykroyd, etc.—that didn’t seem to match the term quite as well. And I do think that the wide breadth of coverage held the book back a bit—he’s at his best detailing the success of acts like Laurel & Hardy and Abbott & Costello and I found myself wishing he didn’t have to move on so quickly. Still, it was interesting to read the progression of the comedic team and how successive acts built upon their predecessors and Epstein provides several sharp insights throughout the book. Flawed, but a decent introduction.

Jacobs' book is in sharp contrast to many of the “how to read” books that are out there. The idea of a prescriptive list of books you “should read” appalls Jacobs, and he spends a good amount of time arguing for reading based on your whim instead. Ironically, in arguing against many books that tell you how to read, he sort of ends up doing the same—just from a different perspective. But there’s a lot here that gets you thinking—his disdain of reading lists, his arguments that most of us read too fast. That, and the many interesting anecdotes along the way, make it an enjoyable book.

Rose’s book is a very clearly organized look at how widespread and cheap computing could impact objects from our everyday lives. Much of his ideas are tied directly back to abilities from science-fiction and fantasy, which does offer an interesting perspective. The book doesn’t quite get into the underlying design principles enough for my taste (it’s aimed at a more general audience I’m guessing) and there are certainly a few gimmicky examples, but overall it does get you thinking about the potential of the “internet of things” in a different light.

I loved this series so much! As with the first two books, there’s plenty of laugh-out-loud moments somehow mixing perfectly with tense action. Seeing the dynamics shaken up a bit after the end of book two brought a new level of depth to the characters—in fact, this may be the author’s finest work in terms of character development in the series.

I’ve heard he’s working on another trilogy with new characters but set in the same universe—I cannot wait!

I’ll openly admit I picked up this book looking for evidence for things I already believed. Gray’s book fit the bill very well. He doesn’t just think compulsory schooling—full of worksheets and testing—is ineffective, he builds a case that it is actively harmful and can’t hold a candle to the way we learn naturally: through play.

In the early chapters, he builds the case extremely well. He provides the data, provides a counterpoint, and then the data to dismiss the counterpoint. Unfortunately a few of the later chapters aren’t quite as thorough and rely a little more on anecdotal evidence. Still, I can’t imagine anyone coming away from this book unconvinced that one of the best things we can do to improve the state of education today is move away from our current model and allow kids more time to play and experiment so that they don’t just learn better, but develop a love of doing so.

Everyone talks about this book so I figured it was about time I give it a read. True to what I had heard, it’s a fabulous book. I am still not sure how Dostoyevsky made such a cruel main character also somehow sympathetic, but he does just that. As is the case with many of the great novels, Crime and Punishment is a rich book with many pauses in the main narrative for philosophical and historical discussions. Some may not enjoy the slower pace, but I find the side discussions fascinating—they add so much more nuance to our understanding of the characters and how they think. I highly suspect that this is one of those books that you can read over and over, picking up new details each time.

A solid finish for a very solid and thought-provoking trilogy. I think Naam’s writing improved as the series went on, though at times book #3 felt a little too sprawling—like there were too many bit characters in play to keep straight.

Still, as was the case with the first two books, Apex forces you to consider the vast implications of pervasive technology that is not as far off as we may think (which was once again backed by a chapter at the end where Naam discusses the real-life technology influencing the book).

RIM is one of the most infamous stories in tech: a company that rose to the very top only to get so stuck in their current vision that they couldn’t see the changes happening around them that would lead to their demise. This is a well-written and engaging account of the rise and fall of RIM and makes for a very nice starting point for understanding what mistakes were made and more importantly, why.

Shades of Grey takes a little bit to get going. Fforde carefully and meticulously builds up this world and the characters in it. But man, the pay-off is SO worth it. The more you learn about the world, the more you get sucked into it. The writing is great, the story is compelling, the characters are vividly brought to life and the world is completely unique. My only disappointment was in finding that book 2 is not out yet (and it’s been 6 years)! Can’t wait to see how the rest of the story plays out.

The plot often felt a little too familiar—like I had been through many of these same sorts of scenes and situations before. Yet I still ended up enjoying it a bit. That I did is a testament to Chu who is very good at mixing action and fun. It’s not nearly as strong as his Tao books, but the potential is there for it to take off in the second book.

Speak starts off strong and the concept has a ton of potential, but it ended up falling a little short. The writing is pretty solid, but I don’t think the various narratives worked together as well as they could have. It felt like there was something important to be said here but the book never quite got around to saying it.

Thoroughly enjoyed this! Witt weaves the story of three central figures—the creators of the MP3, one of the most well-known and successful music executives and one of the most prolific “leakers”—together to create a fascinating look at how digital music (and piracy) revolutionized the music industry.

Impro is not just a solid introduction to improvisation, but an important look at how current educational systems tend to drive away creativity and what we can do to bring it back. The chapter on Masks felt a bit…odd…at times, and parts of the book drag a little, but there’s plenty of food for thought here.

I admit that there’s some confirmation bias at play here: I’ve increasingly felt like we are so quick to raise our online pitchforks without stopping to consider what the possible outcome might be. In fact, if I had my way, before you signed up for any social media site you would be required to read this book. Ronson’s style of writing makes you feel like you’re taking the journey right alongside of him as he moves from idea to idea, trying to make sense of shaming and its merits and risks. I’m not 100% sold on all the conclusions, but his take is always well articulated and gets you thinking more critically about how you interact with others online.

I also appreciated Jon’s ability to look himself in the mirror and acknowledge his own faults, as well as his own privileges that lessen (though not eliminate) the risk of experiencing the same degree of shaming experienced by some in the book.

I’d love to read a follow-up that dovetails off some of the ideas expressed in the final chapter about feedback loops and the echo chambers created by social media, as I feel like that is key to understanding why we interact the way we do on Twitter, Facebook and their kin.

One note: the book gets a little intense and explicit at parts so if that’s not something you’re comfortable with, you may want to find another take on the topic.

This was a humbling book to read. Epley walks through all the ways we “think” we understand where others are coming from when in reality we understand very little. The research mentioned is a bit light in parts, but overall Mindwise does a great job of discussing a very important topic.

This simply written book provides a fascinating account of not just one boy’s curiosity and drive, but also of what it’s like to grow up in a small African village (which actually is what the majority of the first half of the book is about). Well worth a read.

Leslie breaks curiosity down into two forms: diversive (shallow; Googling the capital of Australia) and epistemic (deeper; reading books about Australia’s history and economy). His book focuses on epistemic curiosity: why it matters and what we can do better to encourage it. I found Leslie’s analysis to be pretty nuanced and I loved how curiosity wasn’t framed so much as a trait of a person, but instead as a choice. Though I don’t agree with all of his conclusions, especially some of those around education, Curious works as a good overview of the topic of curiosity with plenty of recommendations for where to dig deeper.

Full review here. The short version: Lara and Destiny wrote a wonderful little guide to setting up a device lab that is equally good for companies of all sizes. They walk through everything you could possibly want to know and more.

I typically really enjoy Shirky’s writing, but this one was a little subpar. While the topic itself is fascinating, it felt like Shirky kind of threw this one together a little too quickly—the connections between the main topic and his tangents were tenuous. There are a few interesting tidbits scattered throughout, but overall the discussion felt a bit shallow.

Full review here. The short version: With all the power WebPageTest provides, there was a huge need for a comprehensive guide to getting the most out of it. Now we have one—a very good one at that. No matter how much (or how little) you think you know about WebPageTest, you’ll walk away from this book with a few new tricks up your sleeve.

Full review here. The short version: Adaptive Web Design should be one of the first books on the shelf of anyone building for the web. Showing a deep understanding of the web, Aaron manages to cram nearly 20 years of insight into a book that is an absolute pleasure to read. I dare you to try and read this book without a highlighter handy.

Another solid book from Hugh Howey, though I do think it falls a little short of the lofty bar set by Wool and Sand. Still, a gripping story of a man battling to maintain sanity while also having to make moral and ethical decisions with very serious consequences.

Full review here. The short version: Karen’s book isn’t going to get super technical—she’s approaching the topics from a higher level which means the audience of people who would benefit from reading this is pretty broad. Going Responsive needs to be read by anyone planning to build a responsive site—designers, developers and (perhaps especially) management.

Full review here. The short version: I love how Josh weaves seamlessly back and forth between the why and the how: here’s why this is the case, now here’s a practical way for you to design based on that knowledge. The book ends up being a mini-master class about designing for touch and gestures.

Full review here. The short version: When I grow up, I want to write as well as Ethan does. His style of writing is just so pleasant: conversational, informative and entertaining. He also, as it turns out, knows a little bit about this whole responsive design thing. Ethan pulls from a ton of experience to write an extremely useful book.

A satisfying conclusion to what was a really solid trilogy. The pace picks up here after the slower second book and I found I had a hard time putting the book down. My only critique is that the final conflict felt a tad anti-climactic after such a great build-up. If you like the Ancillary series, you’ll enjoy this finale as well as it has all the same things that have made the other books so good: great dialogue, smart writing, and plenty of tea.

Once a year, a bunch of people get together to judge a bunch of chatterbots on their ability to pass the Turing test. The judges chat with a mix of bots and real people and try to figure out who is who. One of the awards handed out is for the “Most Human Human”—the person who was most easily identifiable as a human being based on their chats. Christian sets out to win that prize in 2009, and the result is this thought-provoking book about the way we reason, the way we communicate and the complexities of language.

The book is a few years old so some of the bots he praises seem quite poor, but that’s really secondary to the more interesting philosophical discussion to be had here (as well as a nice little anecdote around why those who think philosophy is useless are already philosophizing).

Leviathan Wakes is one heck of a fun read. Corey avoids most of the faults that frequently bog down long, space-opera novels to create a book that’s a page turner from start to finish. He doesn’t beat you over the head with the science, but instead puts more focus on creating characters you care about. The result is a fast-paced novel that is part science-fiction, part detective story. It’s the first in a long series of books, so if you’re afraid of long-term commitment, you may want to look elsewhere.

I’m good for a heat of the moment rant about either standards or Apple (often both) every couple years. This year, it was about Apple’s influence over the standardization process after some fallout around the Pointer Events specification.

In light of Facebook’s Instant Articles feature and FlipKart’s announcement about leaving the web (something they’ve since reversed their stance on), I wrote a post about why the issue with poor performing sites has nothing to do with technical limitations. Performance is a decision. We actively choose whether to devote our time and energy to improving it, or to ignore it and leave it up to chance.

A lot of folks have been very vocally pushing for “HTTPS Everywhere”, and for good reason. Unfortunately, moving to HTTPS can be kind of painful. Let’s Encrypt hit public beta and I walked through using their tool to simplify the process.

Google announced the AMP Project and while I appreciated the focus on performance, I feel their approach (use this specific framework as a way to build a faster version of your existing page and get some enhanced distribution options as result) puts the incentive in the wrong place. I’m still not overly fond of the approach and hope we can find a more standardized solution.

Other years

Taking Let’s Encrypt for a Spin

A lot of folks have been very vocally pushing for “HTTPS Everywhere”, and for good reason. The fact that the lack of HTTPS makes you miss out on shiny new things like HTTP/2 and Service Workers adds even more incentive for those a little less inspired by the security arguments.

Unfortunately, moving to HTTPS can be kind of painful as you can see from Jeremy Keith’s excellent post detailing exactly how he got adactio.com onto HTTPS. He pinpoints the major obstacle with HTTPS adoption at the end of his post:

The issue with https is not that web developers don’t care or understand the importance of security. We care! We understand!

The issue with https is that it’s really bloody hard.

Let’s Encrypt—a new certificate authority from the Internet Security Research Group (ISRG)—has been promising to help with this, pledging to be “free, automated and open”.

They just announced public beta today, so I decided to give the beta version of their system a try on wpostats.com. Like Jeremy’s blog, WPO Stats is housed on a Digital Ocean virtual machine running Ubuntu 14.04 and Apache 2.4.7.

Getting Let’s Encrypt installed

The first thing I had to do was get the Let’s Encrypt client installed. To do this, I logged into the WPO Stats server and followed the instructions on the Let’s Encrypt repo.

First I grabbed the repo using git:

git clone https://github.com/letsencrypt/letsencrypt

Once git had done its magic and pulled down the Let’s Encrypt client, I needed to actually install it. To do that, I navigated to the newly created letsencrypt directory and then ran the Let’s Encrypt client with the help flag enabled.

cd letsencrypt

./letsencrypt-auto --help

This does that scary-looking thing where it downloads a bunch of different dependencies and gets the environment set up. It went off without a hitch and after a few moments it completed and told me I was ready to begin.

Obtaining and installing a certificate

The install process was smooth, but I was bracing myself for the actual SSL setup to be a bit more painful. As it turns out, I didn’t have to worry.

To run the client and get my certificate, I ran the same command without the help flag:

./letsencrypt-auto

This popped up a pleasant little GUI (Figure 1) that walks through the rest of the process. The first screen was a warning.

No names were found in your configuration files. You should specify ServerNames in your config files in order to allow for accurate installation of your certificate. If you do use the default vhost, you may specify the name manually. Would you like to continue?

Figure 1: First screen of the letsencrypt client GUI banner.

In this case, I only use the server for WPO Stats—nothing more. This means that, yes, I use the default vhost. I selected ‘Yes’ and moved along. Where this might be different is if you were hosting multiple domain names on a single server. For example, if I ran this site on the same server, I may have virtual hosts set for both timkadlec.com and wpostats.com and would need to have that specified in my config files.
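
As a rough sketch of what that might look like (the DocumentRoot paths here are made up for illustration), each site would get its own virtual host with a ServerName:

<VirtualHost *:80>
    ServerName timkadlec.com
    DocumentRoot /var/www/timkadlec
</VirtualHost>

<VirtualHost *:80>
    ServerName wpostats.com
    DocumentRoot /var/www/wpostats
</VirtualHost>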

The next three prompts were straightforward. I had to enter my domain name, my email address, and then accept the terms of service. I’ve always liked easy questions.

After that, I was prompted to choose whether I wanted all requests to be HTTPS, or if I wanted to allow HTTP access as well. I had no reason to use HTTP for anything, so I selected to make everything secure.

And, well, that was it. The next GUI prompt informed me that I was all set and that I should probably test everything out on SSL Labs.

Figure 3: Final screen of the letsencrypt GUI informing me I was victorious.

I checked the site, and everything was in working order. I ran the SSL Labs test and everything came back a-ok. For once, it really was as simple as advertised.

I felt like trying my luck so I went through the process again for pathtoperf.com and, again, it went through without a hiccup. All told it took me about 10 minutes and $0 to secure both sites. Not bad at all.

Going forward

The improvement between the obnoxiously complicated process Jeremy had to suffer through and the simplified process provided by Let’s Encrypt is absolutely fantastic.

I don’t want to mislead you—there’s work to be done here. I don’t know that every server setup will be quite as smooth as the Apache process was, and without root access to the server you still have to go through some manual steps.

But that’s where they’ll need you. Try it out on your own servers and test sites and if you run into difficulties, let them know. I’m really optimistic that with enough feedback and input, Let’s Encrypt can finally make HTTPS everywhere a less painful reality.

]]>2015-11-23T10:40:00-06:00http://timkadlec.com/2015/11/holiday-web-readingI enjoy reading and one of the rules of all well-behaved reading enthusiasts—much like vegans, cross fitters and people who eat gluten free—is to never stop telling everyone we know (and even some people we don’t know) about it.

I hadn’t read very many industry books this year, but the second half of the year was absolutely bursting with great options and I couldn’t resist. Here’s a list of the ones that I’ve found time to read and highly recommend.

Sometimes second editions are relatively minor updates to a prior version of a book. In the case of Adaptive Web Design, I wouldn’t have been upset if that was the case. After all, the first edition was exceptionally well written and provided as clear an explanation of progressive enhancement as you could possibly hope for.

But the second edition of Adaptive Web Design isn’t just a minor update—it’s a completely new take on the topic. While I would have been hard pressed to imagine it happening, Aaron somehow managed to write an even better guide to progressive enhancement.

You see, being told a specific way to code—a specific technique or snippet—can have some short-term value. But what’s more important is thinking about the underlying philosophy and the values that guide those decisions. While techniques come and go, those guiding principles persist. Understanding them at a deep level will help guide you as things change, helping you to make appropriate decisions about how to wield new technology as it emerges.

That’s what Aaron provides here. While there are some specific examples of how you could layer enhancements onto your site, most of the book is focused on helping you understand the underlying principles of progressive enhancement—principles that will help guide your decisions long after you’ve read about them.

I contributed an early quote about the book after I read it through, and it sums up my thoughts much more concisely than these last few paragraphs:

Adaptive Web Design should be one of the first books on the shelf of anyone building for the web. Showing a deep understanding of the web, Aaron manages to cram nearly 20 years of insight into a book that is an absolute pleasure to read. I dare you to try and read this book without a highlighter handy.

The book isn’t out until early December, and you should absolutely pick up a copy when it’s available. I can’t recommend it highly enough.

I was looking forward to this one for quite a while. WebPageTest is such a fantastic tool for performance testing. I’ve said it many times: the fact that a tool this powerful is free is frankly a little silly. It’s so good.

With all the power WebPageTest provides, there was a huge need for a comprehensive guide to getting the most out of it. Now we have one—a very good one at that. The book walks through how to read waterfalls (useful for any performance tool), how to test for single-points-of-failure (SPOF), how to use APIs to drive WebPageTest, how to set up a private instance and much more (including some undocumented power features).

No matter how much (or how little) you think you know about WebPageTest, you’ll walk away from this book with a few new tricks up your sleeve.

Karen is one of those people who has a really wide range of knowledge about what it takes to design and build sites. Going Responsive demonstrates this very clearly. Karen uses her wealth of experience to provide practical advice throughout the process of a responsive project—from selling the project all the way through testing and measuring its impact.

Unsurprisingly, I’m particularly fond of the chapter about emphasizing performance in a responsive project. She does a great job of walking through building a case for performance and how to get started with a performance budget.

Karen’s book isn’t going to get super technical—she’s approaching the topics from a higher level which means the audience of people who would benefit from reading this is pretty broad. Going Responsive needs to be read by anyone planning to build a responsive site—designers, developers and (perhaps especially) management. As Karen points out in the introduction, successfully implementing a responsive design requires much more than design and development. It “requires a new way of solving problems and making decisions.” This book is a wonderful guide to help you make that shift.

When I grow up, I want to write as well as Ethan does. His style of writing is just so pleasant: conversational, informative and entertaining.

He also, as it turns out, knows a little bit about this whole responsive design thing. Ethan pulls from a ton of experience to write an extremely useful book here. Using real-world examples, he walks you through common patterns for navigation, images and video—even advertising.

He takes the time to carefully analyze each potential solution, exposing the benefits and disadvantages of each. Approaching the topic this way makes sure that you not only walk away with concrete ideas for how to navigate some of responsive design’s trickier bits, but you thoroughly understand all the potential trade-offs.

You have to love the pocket guides from Five Simple Steps. They’re hyper-focused, practical, concise (this one is 89 pages) and an absurdly good value (£3.00 which is less than $5).

Lara and Destiny wrote a wonderful little guide to setting up a device lab that works equally well for companies of all sizes. They walk through everything you could possibly want to know—and probably more. Few people consider, for example, how to keep things charged appropriately, and that topic gets its own chapter here.

When I was working with companies, before we started doing any design and development I often helped them get a solid device lab in place. Had this book been around then I would’ve been handing it out to every single one of them.

First off, a critique: while there are many touch-based puns in this book, I did not see a single pun based on Neil Diamond’s Sweet Caroline. That feels like a missed opportunity and frankly I’m a little disappointed.

Putting aside my love for Neil Diamond, the rest of Josh’s book is spot-on. I love how Josh weaves seamlessly back and forth between the why and the how: here’s why this is the case, now here’s a practical way for you to design based on that knowledge.

The book ends up being a mini-master class about designing for touch and gestures. You learn about the ideal locations for making controls easier (and harder in some cases) to get to, how to help with discoverability, how to minimize response times and how to rethink traditional input types to make things easier for people and their “clumsy sausages” (Josh’s words, not mine).

The level of knowledge here is impressive—Josh knows this stuff inside and out, and he manages to explain the topic in a way that is both concise and fun.

]]>2015-10-08T11:31:00-05:00http://timkadlec.com/2015/10/amp-and-incentivesIncentives are fascinating. Dangle the right carrot in front of people and you can subtly influence their behavior. But it has to be the right carrot. It has to matter to the people you’re trying to influence. Just as importantly, it has to influence the correct changes.

A few years ago there was a story of incentives gone wrong that was making the rounds. The story was about a fast food chain that determined customer service was an important metric that they needed to track in some way. After discussion, they determined that the time it took to complete an order in the drive thru seemed to be a reasonable proxy.

So they set a goal: all drive thru orders needed to be completed within 90 seconds of the car’s arrival at the window. They had a timer visible to both the customer and the server. If the timer went over 90 seconds, the time would be recorded and then reported back to corporate headquarters.

There were some rather silly and unintended side effects. One of the most absurd happened when a customer informed the server that part of their order was missing. The server had the customer first drive forward a few feet, and then back up to the window. This way, the timer reset and it wouldn’t be flagged as a slow order in the reports.

It’s silly. But the incentives being applied encouraged this sort of… let’s call it creativity. The incentives were intended to encourage better customer service, but by choosing the wrong method of encouragement, they influenced the wrong kind of change.

Yesterday morning the Accelerated Mobile Pages (“AMP”) Project was announced to a loud chorus of tweets and posts. The AMP Project is an open source initiative to improve performance and distribution on the mobile web. That’s a very fancy way of saying that they aim to do for the web what Facebook Instant Articles does for…well…Facebook.

I’ll be completely honest: when I first started reading about it I was viewing it as basically a performance version of the Vanilla JS site. A “subset of HTML”, no JavaScript—it sounded very much like someone having a little too much fun trolling readers. It was only after seeing the publishers who were associated with the project and then looking at the GitHub repo that I realized it was a real thing.

AMP provides a framework for developers to use to build their site with good performance baked in—not entirely unlike what Bootstrap or Foundation do for responsive design.

To build a “valid AMP page” you start by using a subset of HTML (carefully selected with performance in mind) called AMP HTML. You also use the AMP JavaScript library. This library is responsible for loading all external resources needed for a page. It’s also the only JavaScript allowed: author-written scripts, as well as third party scripts, are not valid.

If you want to load resources such as images, videos or analytics tracking, you use the provided web components (branded as AMP Components).

By enforcing these conditions, AMP retains tight control of the loading process. They are able to selectively load things that will appear in the initial viewport and focus heavily on ensuring AMP pages are prerender and cache friendly. In return for having this level of granular control, they give up browser optimizations like the preloader.

To further help achieve the goal of “instant load”, Google is offering to provide caching for these AMP pages through their CDN. It’s free to use and publishers retain control of their content.

The result is pretty impactful. The AMP Project is reporting some rather significant improvements for publishers using the AMP pages: anywhere from 15-85% improvement in Speed Index scores when compared to the original article.

So from a performance standpoint, the proposition is pretty clear: buy into AMP’s tools and approach to development and in return you’ll get a fast loading page without all the hassle of actually, you know, optimizing for performance.

There’s not anything particularly revolutionary about this. The Google caching is notable in that it is free, but other than that it appears to be nothing more than any CDN can do for you. You can build your sites to be prerender and cache friendly. You can limit your use of JavaScript. You can carefully select your HTML and write your CSS with the goal of performance in mind. You can do all these things all by yourself (and in fact you should be doing all of these things).

There is also nothing too exciting about the claim that using a subset of the web’s features will improve your performance. Kill JavaScript on any traditional article page out there and you’ll likely see very similar returns.

The advantage that AMP has over anyone else who might try to make similar claims is that AMP provides clear incentive by promising better methods of distribution for AMP content than non-AMP content.

The distribution model is slightly fuzzier at the moment than the performance impact, but with a little imagination you can see the potential. The AMP Project is promising a much-needed revenue stream for publishers through soon-to-be-added functionality for subscription models and advertising. Google, for its part, will be using AMP pages in their news and search products at the very least.

The demo is definitely impressive (provided your article uses “valid AMP HTML”). AMP pages get pulled into a nicely formatted carousel at the top of the search results and pages load instantly when tapped on. It’s exactly the kind of performance I would love to see on the web.

Google does claim they have no plans at the moment to prioritize content that is on AMP pages, but how many of us are going to be surprised to see an implementation like this go live?

AMP has given performance a “paint-by-numbers” solution. The project has also drawn a very clear line from point A to point B: do this, and here’s what we’ll do for you.

As a result they get to do an interesting thing here: they get to suggest a big, fat “reset” button and have people take them seriously.

Feels like content blockers are a two-decade reset button, sending us back to 1995 when nobody was sure how to make money publishing online.

No question that’s scary, but it’s also an opportunity. We can look at what we got wrong in the last 20 years, and try something different.

It’s kind of a unique moment. How often does an entire industry get an almost literal do-over?

AMP is experimenting with what a do-over would look like. Start fresh. Take all the baggage we’ve been adding, remove it, and then try to collectively come up with something better.

If anyone had suggested hitting “reset” a month ago, I would have found it to be an interesting thought experiment. I may have even gotten a little bit excited about the idea. So why is it that now that it’s here, I find it a bit unsettling?

I think it comes down to incentives.

If you can build a site that performs well without using AMP, then what does AMP offer us? Certainly convenience—that’s the primary offering of any framework. And if AMP stopped there, I think I’d feel a little more at ease. I actually kind of like the idea of a framework for performance.

It’s the distribution that makes AMP different. It’s the distribution that makes publishers suddenly so interested in building a highly performant version of their pages—something they’re all capable of doing otherwise. AMP’s promise of improved distribution is cutting through all the red tape that usually stands in the way.

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Getting rid of the clutter on the web and improving performance is a very worthy and important goal, as is finding ways to allow publishers on the web to have a consistent revenue stream without derailing the user experience.

But they should be decoupled. Provide tooling to improve performance. Provide a model and method for producing a revenue stream and improving distribution. You can encourage better performance by factoring that into the distribution model, but do that for performance in general—not just performance gained by using a specific set of tools.

There’s a smart team behind AMP and I do think there’s value in what they’re doing. I’m hopeful that, eventually, AMP will evolve into something that really does benefit the web as a whole—not just a specific version of it.

]]>2015-09-30T10:45:59-05:00http://timkadlec.com/2015/09/the-fallacy-of-keeping-upThe web has always evolved fairly quickly but as of late it sure feels like the pace has picked up substantially. There are a plethora of new standards and techniques emerging that range from incremental improvements to potentially giant leaps forward.

We have the mass migration to HTTPS. There’s HTTP/2, which provides the first major update to HTTP in over 15 years. Alongside that we have Google’s QUIC, which could provide a significant reduction in latency. Service workers bring a programmable proxy to the browser. We have more focus than ever on motion design on the web. Improved performance metrics have shifted the discussion to more experience-based optimizations such as optimizing for the critical path. We have the shift to ECMAScript 6. The list goes on and on.

Quite worryingly, some of those words are gobbledegook to me. Looks like I have some research to do!

That sense of worry is something that seems to be widespread in our industry. Arguably the most common question I’ve heard at events over the last few years—whether directed to myself, another speaker, or simply discussed over drinks at the end of the night—is how people “keep up”. With everything coming out there is a collective feeling of falling behind.

Some have blamed it on increasing complexity but I don’t really buy that. My first few sites were simple (and ugly) things I put together using Notepad and an FTP client while teaching myself HTML using a little magazine I bought. If I were just getting started today that same setup would work just as well. In fact, it would probably be easier as the baseline of browser support has generally improved and frankly, there are a ton of excellent resources now for learning how to write HTML, CSS and JavaScript.

I didn’t think much about accessibility or performance or semantic markup or visual design when I started. I just used what little I knew and learned to build something.

Over time as I learned more and more about the web, I started to recognize the extreme limitations of my knowledge. I realized accessibility was important and that I needed to learn more about that. I learned that performance was important. I learned that typography was important.

And so I dug in and tried to learn each. The more I learned, the more I realized I didn’t know. It’s the Dunning-Kruger effect in full force.

No, I don’t think the complexity of building for the web has changed. I think our collective understanding of what it means to build well for the web has and that as that understanding has deepened, we’ve become acutely aware of how much we individually still do not know.

I certainly have improved as a developer since I first started. Yet everything I’ve learned has exposed a dozen more topics I know nothing about. The list of things I don’t know about the web grows as fast as my well-intentioned “read-it-later” list, so how do I prioritize and figure out what to explore next?

So I’ve started devoting the time I have for learning new things to learning the things that I like, that matter to me, and hopefully that will show in my work and in my writing. It may not be sexy and it may not be the hottest thing on the web right now, but it’s still relevant and important to making a great site or application. So instead of feeling overwhelmed by code, maybe take a step back, evaluate what you actually enjoy learning, and focus on that.

I completely agree with her stance on learning about what interests you, but I would add one small bit of advice to this as well: when in doubt, focus on the core. When in doubt, learn CSS over any sort of tooling around CSS. Learn JavaScript instead of React or Angular or whatever other library seems hot at the moment. Learn HTML. Learn how browsers work. Learn how connections are established over the network.

The reason for focusing on the core has nothing to do with the validity of any of those other frameworks, libraries or tools. On the contrary, focusing on the core helps you to recognize the strengths and limitations of these tools and abstractions. A developer with a solid understanding of vanilla JavaScript can shift fairly easily from React to Angular to Ember. More importantly, they are well equipped to understand if the shift should be made at all. You can’t necessarily say the same thing about an INSERT-NEW-HOT-FRAMEWORK-HERE developer.

Building your core understanding of the web and the underlying technologies that power it will help you to better understand when and how to utilize abstractions.

That’s part one of dealing with the rapid pace of the web.

The second part is letting go and recognizing that it’s ok not to be on the bleeding edge.

In another fantastic A List Apart post today, Lyza Danger Gardner looked at Service Workers and the conundrum of how you can use them today. As she points out, for all the attention they’ve received online, support is still very limited and in several cases incomplete. While I think Service Workers have a simpler migration path than many other standards—the whole API was built from the ground-up to be easy to progressively enhance—I think her nod to the hype versus the reality of support is important.
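To see what that migration path looks like in practice, here is a minimal sketch of registering a service worker as a pure enhancement (the /sw.js path is hypothetical):

// Browsers without support skip this block entirely and keep working as before.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function (registration) {
    console.log('Service worker registered with scope:', registration.scope);
  }).catch(function (error) {
    // A failed registration isn't fatal; the page still loads and works.
    console.log('Service worker registration failed:', error);
  });
}

Nothing breaks in browsers that haven’t implemented the API yet, which is exactly the point.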

Service workers are one of those potentially seismic shifts on the web. New uncharted territory. And that brings excitement which in turn has brought a lot of posts and presentations about this new standard. For people who have seen all of this chatter but haven’t actually dove in yet, it can feel like they’re quickly falling behind.

But for all that hype, browser support is still in the early days. Building with service workers is still living on the edge—it’s pretty far from mainstream. The same is true for many of the technologies that are seeing the most chatter.

That doesn’t mean you don’t want to pay attention to them, but it does mean you don’t need to feel left behind if you haven’t yet. These are very new additions to the web and it will take time for our understanding of their potential (and their limitations) to develop.

As Dan McKinley has eloquently argued, there is a great deal of value in forgoing life on the bleeding edge and instead choosing “boring technology”—technology that may not be as “cool” but that has been around awhile. The major advantage is that the kinks have been worked out:

The nice thing about boringness…is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.

Bleeding edge technology is exciting, but there is a reason that phrase is so vivid.

If you were to ask me, “Tim, how do you keep up?” my answer would be this: I don’t. I don’t think any of us do. Anyone who tries telling you that they are keeping up with everything is either putting up a front or they’re not yet knowledgeable enough to be aware of how much they don’t know.

No matter how much time we spend working on the web, there is always some other API or tool or technique we haven’t used. There is always something we haven’t fully understood yet.

We’re blessed with a community full of people willing to share what they are learning, creating a vast knowledge base for us to tap into. We don’t need to know everything about the web. In fact, we can’t know everything about the web.

But that isn’t something to feel guilty about. That isn’t because of increasing complexity. That isn’t some sort of personal weakness.

It’s a sign of a deepening understanding of this incredible continuum we get to build and an honest acknowledgement that we still have so much left to learn.

]]>2015-07-28T09:47:59-05:00http://timkadlec.com/2015/07/understanding-proxy-browsers-architectureI did a bunch of research on proxy-browsers for a few projects I worked on. Rather than sitting on it all, I figured I’d write a series of posts sharing what I learned in case it’s helpful to anyone else. This first post looks at the general architecture of proxy browsers with a performance focus.

In the original story of the Wizard of Oz, the Emerald City isn’t actually green nor made entirely of emeralds. All of that came later. In the original story, before entering the city each person had to put on a pair of glasses. These glasses, they were told, would protect them from the bright glow of all the emeralds that would surely damage their sight. These glasses were attached and never removed. You wore them while eating, while going to the bathroom, while walking outside—you wore them everywhere and all the time.

This was all a ruse. The glow of the city wouldn’t damage anybody’s sight because there was no glow. That all came from the glasses which just happened to be tinted green. Through the lens of those glasses, everything glowed. The lens through which those people viewed their world shaped their perception of it.

I’d venture to say that most developers and designers are not big fans of proxy browsers—assuming they pay attention to them at all. They don’t behave in ways a typical browser does, which leads to frustration as we see our carefully created sites fall apart for seemingly no reason at all. And frankly, most of us don’t really need to use them on a day-to-day basis. Through the lens we view the web, proxy browsers are merely troublesome relics of a time before the idea of a “smartphone” was anything other than a pipedream.

But our view of the web is not the only view of the web. People all over the world face challenges getting online—everything from the cost of data and poor connectivity to religious and political obstacles. In these environments proxy browsers are far from troublesome; they are essential.

So while most of us building for the web have never used a proxy browser (outside of the quick spot check in Opera Mini, perhaps), they remain incredibly popular globally. Opera Mini, the most popular of all proxy browsers, boasts more than 250 million users. UC, another popular proxy browser, boasts 100 million daily active users and is the most popular mobile browser in India, China and Indonesia.

These browsers perform optimizations and transcoding that can provide significant improvements. Several proxy browsers claim up to 90% data savings when compared to a typical browser. That’s the difference between a 2MB site and a 200KB site—nothing to sneeze at.

To understand how they accomplish this—and why they behave the way they do—we first need to revisit what we know about how browsers work.

Typical Browser Architecture

A typical modern browser goes through a series of steps to go from the URL you enter in your address bar to the page you ultimately see on your screen. It must:

Resolve the DNS

Establish TCP connection(s) to the server(s)

Request all the resources on a page

Construct a DOM and CSSOM

Build a render tree

Perform layout

Decode images

Paint to the screen

That’s a very simplified list and some of them can happen in parallel, but it’s a good enough representation for the purpose of highlighting how proxy browser architecture differs.

We can break these steps out into two general buckets. Steps 1-3 are all network constrained. How quickly they happen, and the cost, depends mostly on the characteristics of the network: the bandwidth, latency, cost of data, etc.

Steps 4-8 are device constrained. How quickly these steps happen depends primarily on the characteristics of the device and browser: the processor, memory, etc.

Proxy browsers intercede on behalf of the user in an attempt to reduce the impact of one, or both, of these buckets. You can broadly classify them into two categories: browsers with proxy services, and remote browsers.

Browsers with proxy services

The first category of proxy browsers are really just your plain-old, everyday browser that happens to offer a proxy service. These browsers alter the typical browser behavior only slightly, and as a result they provide the least benefit for end users as well as—usually—the least noticeable impact on the display and behavior of a web site. (While not really tied to a browser, look at Google’s search transcoding service for an example of how substantially a proxy service could alter the display of a page.)

Instead of requests being routed directly from the client to the web server, they are first routed through some intermediary layer of servers (Google’s servers, UC’s servers, Opera’s servers, etc). This intermediary layer provides the proxy service. It routes the request to the web server on behalf of the client. Upon receipt of the request, it sees if there are any optimizations it can provide (such as minification, image compression, etc) before passing back the potentially altered response to the client.

The browser-specific behavior (steps 4-8) remains the same as in the typical browsers you’re used to testing on. All of the optimizations that take place focus primarily on reducing the impact on the network (steps 1-3).

There are many examples but at the moment of writing some of the more popular options in this category are Google’s Data Compression tool (Flywheel), UC Web’s Cloud Boost, and Opera Turbo.

Remote browsers

Remote browsers push the limits a bit more. They aggressively optimize as much as possible providing a much larger benefit for the end user, but also a lot more trouble for developers. (If that bothers you try to remember that the proxy browsers exist because users need them, not because developers do.) These are the browsers you more typically think of when hearing the term “proxy browser”. With the increase in browsers offering proxy services, I think referring to these as remote browsers can be a helpful way of distinguishing them.

Unlike their more conservative brethren, remote browsers are not content to merely make a few optimizations on the network side of things. They’ve got more ambitious goals.

When a website is requested through a remote browser, the request is routed through an intermediary server first before being forwarded on to the web server. Sounds familiar right? But here’s where remote browsers start to break away from the traditional browser model.

As the response comes back to the intermediary server, instead of routing it straight on to the client, the intermediary proceeds to request all the subsequent resources needed to display the page as well. It then performs all parsing, rendering, layout and paint on the intermediary server. Finally, when all of that is taken care of, it sends back some sort of snapshot of that page to the client. This snapshot does not consist of HTML, CSS and JavaScript—it’s a proprietary format determined by whatever the browser happens to be.

That’s why calling them “remote browsers” makes so much sense. The browser as we know it is really contained on the server. The application on the phone or tablet is nothing more than a thin-client that is capable of serving up some proprietary format. It just so happens that when it serves that format up, it looks like a web page.

The most important thing to remember for remote browsers is that because all they are doing is displaying a snapshot of a page, anything that might change the display of that page requires a trip back to the server so an updated snapshot can be generated. We’ll discuss that in more detail in a later post as the implications are huge and the source of most proxy browser induced headaches.

There are many options, but Opera Mini, UC Mini and Puffin are some of the more popular.

What’s up next?

Understanding the basic architecture of proxy browsers makes testing on them so much easier and far more predictable. It’s the key to understanding all of the atypical behavior that causes so many developers to cringe whenever they have to fire up a proxy browser for testing.

With the foundation laid, we can spend the next several posts digging deeper into the specific optimizations the two categories of proxy browsers make as well as consider the implications for developers.

]]>2015-06-29T13:45:41-05:00http://timkadlec.com/2015/06/thriving-in-unpredictabilityGetting a website successfully delivered to a visitor depends on a series of actions. My server must spit something out. That something must be passed over some network. That something must then be consumed by another something: some client (often a browser) on some device. Finally, the visitor views that something in whatever context they happen to be in.

There are a lot of unpredictable layers here.

I have no control over the network. It could be fast, it could be slow, it could be down entirely.

I have no control over the end device. It could be a phone, a laptop, an e-reader, a watch, a TV. It could be top-of-the-line or it could be a budget device with low specs. It could be a device released the other day, or a device released 5 years ago.

I have no control over the client running on that device. It could be the latest and greatest of modern browsers. It could be one of those browsers we developers love to hate. It could be a proxy browser. It could be an in-app browser.

I have no control over the visitor or their context. They could be sitting down. They could be taking a train somewhere. They could be multitasking while walking down the street. They could be driving (I know). They could be color-blind.

The only thing I control is my server environment. That’s it. Everything else is completely unpredictable.

So when I’m building something, and I want to make it robust—to make it resilient and give it the best chance it has to reach across this complicated mess full of unpredictability—I want to take advantage of the one thing I control by letting my server output something usable and as close to working as possible. That doesn’t mean it’s going to have the same fidelity as the ideal experience, but it does mean that provided there’s a network at least there’s an experience to be had.

From there I want to do whatever I can to provide offline support so that after that first visit I can reduce some of the risk the network introduces.

I want to apply my JavaScript and CSS with care so that the site will still work and look as good as possible, no matter how capable their browser or device (there’s a small sketch of what that can look like below).

I want to use semantic markup to give clients as much information as possible so that they can ensure the content is usable and accessible.

I want to build something that’s lightweight and fast so that my content gets to the visitor quickly and doesn’t cost them a fortune in the process.

I want to ensure that content is not hidden from the visitor so that they can get what they came for no matter their context.
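As a small, hedged illustration of that kind of care with JavaScript, here is a sketch in the “cut the mustard” style of feature test: only browsers that clear a baseline get the extra script, while everything else keeps the core server-rendered experience (the enhancements.js file name is hypothetical).

// A conservative baseline check; older or limited browsers simply keep the core page.
if ('querySelector' in document && 'localStorage' in window && 'addEventListener' in window) {
  var enhancements = document.createElement('script');
  enhancements.src = '/js/enhancements.js'; // hypothetical bundle of optional extras
  enhancements.async = true;
  document.head.appendChild(enhancements);
}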

Of course there’s some nuance here in the details, and assumptions will naturally be made at some point. But I want to make as few of those assumptions as possible. Because every assumption I make introduces fragility. Every assumption introduces another way that my site can break.

We used to call that progressive enhancement, but I know that’s become a bit of a loaded term with many. Discussions online, and more recently at EdgeConf, have confirmed this.

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

We’re not building something to work without JavaScript. That’s a short-sighted definition of the term. As both Paul Irish and Kyle Simpson pointed out during EdgeConf, it puts the focus on the features and the technology. It’s not about that.

It’s about the users. It’s about finding ways to make our content available to them no matter how unpredictable the path that lies between us and them.

]]>2015-05-14T11:54:08-05:00http://timkadlec.com/2015/05/choosing-performanceFacebook just announced a new feature they’re calling “Instant Articles”. Facebook is positioning this as a way for publishers to have their stories displayed, within Facebook, “instantly”:

Mobile web articles can take an average of eight seconds to load, by far one of the slowest parts of the Facebook app. Instant Articles provides a faster and richer reading experience for people in News Feed.

Now before we wring our hands too much over this, it’s worth noting that the articles themselves still start on the web. Facebook just becomes a distribution platform. Here’s the exact statement from their FAQs (emphasis my own):

Instant Articles is simply a faster, mobile-optimized way to publish and distribute stories on Facebook, and it supports automated content syndication using standards like HTML and RSS. Content published as Instant Articles will also be published on the publishers’ websites.

From Facebook’s perspective this is a no-brainer. It keeps the content within Facebook’s environment, which is one less reason for Facebook’s users to ever leave the app or site. In addition, we have numerous case studies showing that improved performance improves engagement. So Facebook creating a way to display content—very quickly and within their own little garden—makes absolute sense for them.

What I find interesting is the emphasis on speed. There are a few interesting interactive features, but speed is the selling point here. Facebook is pushing it very, very hard. “Fast” is scattered throughout their information about Instant Articles, and emphasized very heavily in the promotional video.

I’m all for fast as a feature. It makes absolute sense. What concerns me, and I think many others based on reactions I’ve seen, is the fact that Facebook very clearly sees the web as too slow and feels that circumventing it is the best route forward.

Here’s the thing: they’re not entirely wrong. The web is too slow. The median SpeedIndex of the top 1000 websites (as tested on mobile devices) is now 8220 according to HTTP Archive data from the end of April. That’s an embarrassingly far cry from the golden standard of 1000.

And that’s happening in spite of all the improvements we’ve seen in the last few years. Better tooling. Better browsers. Better standards. Better awareness (at least from a cursory glance based on conference lineups and blog posts). Sure, all of those areas have plenty of room for improvement, but it’s entirely possible to build a site that performs well today.

So why is this a problem? Is the web just inherently slow and destined to never be able to compete with the performance offered by a native platform? (Spoiler: No. No it is not.)

Another recent example of someone circumventing the web for performance reasons I think gives us a clue. Flipkart, a very large e-commerce company operating in India, recently jettisoned their website (on mobile devices) entirely in favor of Android and iOS apps and is planning to do the same with their desktop site. Among the reasons cited for the decision, the supposedly better performance offered by native platforms was again a primary factor:

Our app is designed to work relatively well even in low bandwidth conditions compared to the m-site.

Had I been in that interview my follow-up question would’ve been: “Well then, why don’t you design your website to work well even in low bandwidth conditions?” Alas, I was not invited.

But this quote is really the best indicator of why the web is so slow at the moment. It’s not because of any sort of technical limitations. No, if a website is slow it’s because performance was not prioritized. It’s because when push came to shove, time and resources were spent on other features of a site and not on making sure that site loads quickly.

This goes back to what many have been stating as of late: performance is a cultural problem.

While this is frustrating, this is also why I’m optimistic. The awareness of performance as not merely a technical issue but a cultural one has been spreading. If things are progressing a little slower than I would like, it’s also fair to point out that cultural change is a much more difficult and time-consuming process than technical change. The progress may be hard to see, but I believe it is there.

We need this progress. Circumventing the web is not a viable solution for most companies—it’s merely punting on the problem. The web continues to be the medium with the highest capacity for reach—it’s the medium that can get into all the little nooks and crannies of the world better than any other.

That’s important. It’s important for business, and it’s important for the people who need it to access content online. It’s unfair, and frankly a bit naive and narcissistic, to expect anyone who wants to read your articles or buy from your company to A) be using a specific sort of device and then B) go and download an app onto that device to accomplish their goal. The reach and openness of the web are well worth preserving.

So yeah, I think any criticism of the web’s terrible performance is totally valid. We can choose to do better, but our focus is elsewhere.

Scott’s right: performance is a decision. We actively choose whether to devote our time and energy to improving it, or to ignore it and leave it up to chance.

Let’s choose wisely.

]]>2015-04-29T13:10:00-05:00http://timkadlec.com/2015/04/joining-akamaiOn May 11th, I’ll be joining Akamai. I would be lying if I said it was an easy decision. I waffled a lot (For the sports enthusiasts out there, it’s not entirely unlike Favre and retirement. For the rest of you, insert some clever Waffle House pun here.). The past few years of working for myself have been amazing! I’ve gotten to work on some great projects with some great people and have had a ton of fun doing it.

But if you’ve followed along you know that I am extremely passionate about improving performance on the web. Getting a chance to push for better performance from within a company that handles 20% of the web’s traffic and is full of people who are after the same goal was too good an opportunity to pass up.

It’s a big change, but an exciting one. Akamai constantly talks about “building a faster, stronger web”. Sometimes a company has a snappy line that they use, but there is little evidence that they believe in it. That’s certainly not the case here. They’ve been very active in investing in better education, tooling and standards for the web (and recent moves like hiring smart folks like Yoav Weiss to actively work on web standards only further cements that commitment). When they say they want a stronger web, they actually mean it.

The role is a new one within their young (only one year old!) developer relations group. At the moment, it’s pretty undefined other than the goals of helping people make the web faster and helping Akamai figure out the best ways to enable people to do that. While the specifics will be defined over time, here’s what I do know:

The role involves a lot of me doing the things I already like to do. I’m going to be doing a lot of research and experiments around performance and finding ways to improve it. I’m going to learn a lot and share what I learn. The main difference is I’m going to have more time to do it now.

I’m almost certainly going to start being more vocal and active in the standardization process. There are a lot of interesting challenges ahead and we will need improved standards to help us overcome them.

I will not be marketing. Akamai doesn’t want me to do it. I don’t want to do it. On my list of “presentations I don’t like to watch”, product pitches sit right at the bottom just barely above “presentations that involve me getting hit repeatedly in the face.” The stuff I write and talk about is going to be very much like the stuff I’m writing and talking about right now.

I’ll still be working from my headquarters here in beautiful and frequently cold northern Wisconsin. Akamai has done a lot of work to make working remotely as seamless an experience as possible. There’s a lot of Slack in my future.

I will be working with some incredibly talented and friendly people. The dev rel team is small (its only other members are Kirsten Hunter, Darius Kazemi and fearless leader Michael Aglietti), but so very smart and so very talented! Beyond that, I’ve gotten to know many folks at Akamai over the years—some of whom I am lucky enough to call friends. There are a ton of incredibly smart and passionate people there. If you subscribe to the adage that “if you’re the smartest person in the room, you’re in the wrong room”, then Akamai is definitely the right room.

It’s going to be a lot of fun!

We’ve made a lot of progress pushing performance in the past few years, but we’ve got some serious challenges ahead of us as well. Some of them are cultural, some of them are educational, and some of them will require improved tooling and standards.

I’m super excited to get to tackle those challenges head-on!

]]>2015-03-11T14:59:00-05:00http://timkadlec.com/2015/03/what-your-site-costsAs our understanding of performance on the web improves, we are starting to shift from the traditional metrics we’ve focused on. Things like load time and page weight are rightfully being given less focus as we move to more mature metrics like SpeedIndex that provide a better understanding of perceived performance.

But that doesn’t mean we can dismiss page weight altogether. The web is not free. Data has a cost and that cost varies around the world. We’ve always sort of guessed that sites could be a little expensive in some areas, but other than a few helpful people tweeting how much certain sites cost while roaming, there wasn’t much in the way of hard data. So, I built What Does My Site Cost?.

The ITU has data about the cost of mobile data in various countries and the World Bank provides some great information about the economic situation around the world. Pairing the two together, we can get an idea of how much things might cost—and what that means in relation to the overall economy in those countries. I’m not particularly good with economics, but thankfully for me Victoria Ryan is and she was willing to help me work through the details to make sure the numbers actually mean something.

For starters, the site is going to report three metrics.

Cost in USD: What is the approximate cost to the user of loading that page around the world (based on information about the cost of 500MB of data).

Cost in USD, PPP: What is the approximate cost to the user of loading that page around the world (based on information about the cost of 500MB of data), with Purchasing Power Parity factored in. This gives a better representation of relative costs based on the differences in values of currency.

Cost as a % of GNI, PPP: Using the PPP cost already calculated, this metric compares that value to the daily Gross National Income per capita to factor in affordability.
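Under the hood the arithmetic is simple. Here is a rough sketch of how the first and third metrics might be computed. The numbers and the prorate-by-page-weight assumption are purely illustrative rather than the site’s exact methodology, and the country-specific PPP adjustment is only noted in a comment:

// Illustrative sketch only: invented numbers, simplified methodology.
var pageSizeMB = 2;            // total transfer size of the tested page
var pricePer500MB = 5;         // assumed price of 500MB of mobile data, in USD
var gniPerCapitaDaily = 15;    // assumed daily Gross National Income per capita, in USD

// Cost in USD: the page's proportional share of a 500MB data allowance.
var costUSD = (pageSizeMB / 500) * pricePer500MB;

// Per the description above, the site adjusts this cost with Purchasing Power
// Parity conversion factors before the next step; that country-specific
// adjustment is omitted here.

// Cost as a % of daily GNI per capita: how affordable is that single page view?
var percentOfDailyGNI = (costUSD / gniPerCapitaDaily) * 100;

console.log('$' + costUSD.toFixed(3) + ' per view, ' +
            percentOfDailyGNI.toFixed(2) + '% of daily GNI per capita');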

Running Tests

Thanks to the always helpful Pat Meenan, the site is powered by everyone’s favorite performance testing tool: WebPageTest.org. You can choose to run the test directly from What Does My Site Cost?. If you do, WebPageTest will run the test using Chrome mobile over a 3G network and you’ll be able to jump to those results once the test has completed.

Figure 1: Site cost indicators are now available directly in WebPageTest results.

But what really has me excited is the integration directly into WebPageTest. If you use WebPageTest to analyze your site, you’ll see a new “Cost” column in your test results giving you an indicator of how (relatively) expensive your site is. Following the link there will bring you back to What Does My Site Cost for a deeper dive. In other words, you don’t have to go out of your way to find out how much a site might cost—the information will be seamlessly presented to you whenever you test a page.

What’s Next?

For starters, I want to get more countries in there. I’m working on that. I also hope to add in information about roaming costs (almost scared to see how bad those numbers will be) but I have to track down more reliable data there first. That’s a little trickier (so it seems), but I’m sure it can be found somewhere.

As I mentioned before, I’m not very good with economics, so if any of you out there are and have recommendations for additional metrics to show, definitely let me know.

]]>2015-02-24T15:10:00-06:00http://timkadlec.com/2015/02/apples-webThe Pointer Events specification just became a W3C Recommendation. For those unfamiliar, it’s an intriguing attempt to unify pointer events regardless of the input device in use.

…we love Pointer Events because they support all of the common input devices today – mouse, pen/stylus, and fingers – but they’re also designed in such a way that future devices can easily be added, and existing code will automatically support the new device.

Unfortunately, as they went on to point out, there are some hurdles to jump yet. While Microsoft has a full implementation in IE11 and Mozilla is working on it, Apple has shown no interest and Google seems ready to follow their lead.

I was willing to give the Blink folks the benefit of the doubt, because I do remember they had specific and legitimate concerns about the spec a while back. But after reading through notes from a Pointer Events Meeting in August, I’m forced to reconsider. The Chrome representative had this to say:

No argument that PE is more elegant. If we had a path to universal input that all supported, we would be great with that, but not all browsers will support PE. If we had Apple on board with PE, we’d still be on board too.

Doesn’t sound very good, does it?

Let’s set any opinions about Pointer Events aside. Frankly, I need to do a lot more digging here before I have any sort of strong opinion in one direction or another. There is a bigger issue here. We have a recurring situation where all vendors (save for Apple) show interest in a standard, but because Apple does not express that same interest, the standard gets waylaid.

The jQuery team took a very strong stance against this behavior:

We need to stop letting Apple stifle the work of browser vendors and standards bodies. Too many times, we’ve seen browser vendors with the best intentions fall victim to Apple’s reluctance to work with standards bodies and WebKit’s dominance on mobile devices. We cannot let this continue to happen.

As you might expect, the reactions have been divided. While many have echoed those sentiments, some have rightfully pointed out that Apple and Safari have made some really great contributions to the modern Web.

Of course they have. So has Mozilla. So has Microsoft. There have actually been quite a few organizations who can make that very broad and generic claim. They all can also claim the opposite.

But here’s the current reality, one that has been accurate for awhile. Apple has a very, very strong influence over what standards get adopted and what standards do not. Partly it’s market share, partly it’s developer bias (see, for example, how other vendors eventually felt forced to start supporting the webkit prefix due to vendor prefix abuse).

Apple simply does not play well with other vendors when it comes to standardization. The same sort of things we once criticized Microsoft for doing long ago, we give Apple a pass on today. They’re very content to play in their own little sandbox all too often.

They also don’t play particularly well with developers. They supposedly have a developer relations team, but it’s kind of like Bigfoot: maybe it’s out there somewhere but boy there hasn’t been a lot of compelling evidence. This splendid rant from Remy Sharp and the follow-up from Jeremy Keith come to mind. They were written in 2012, but the posts would be equally on point if published today.

The other vendors aren’t exactly perfect either. The Microsoft folks, no doubt reeling from all the negativity aimed at them over the years, have more than once been content to let everyone else duke it out over a standard, only getting involved late when a consensus has been reached. The Blink folks, despite being the best positioned to take a stand, have been happy to play the “Apple won’t do it so I guess we won’t either” card on multiple occasions.

But at least you can have a dialogue with them. It’s easy to reach the Mozilla, Google and Microsoft folks to discuss their thoughts on these emerging standards. That’s a much harder thing to do with the Apple crew.

So I’m tempted to agree with jQuery’s stance about Apple stifling the work of vendors and standards bodies. They haven’t exactly done anything to make me feel like they’re particularly interested in the idea of the “open” web.

But I don’t think other vendors get to be let off the hook. I’m just as happy to point my fingers at them for being so easily persuaded by an argument that amounts to “we don’t want to”. I’m not comfortable with a single entity being able to hold that much influence when so many others have expressed interest in an idea.

This isn’t a healthy thing for the web. We need something to change here. And I’m optimistic. To quote Jeremy’s 2012 post:

It can change. It should change. And the time for that change is now.

]]>2015-02-19T09:27:00-06:00http://timkadlec.com/2015/02/access-optionalI remember going as a kid with my parents when they would pick out a new car. My parents didn’t want to spend a ton so we usually looked for something basic that would work.

The car, of course, had to have certain features. A way to steer. Brakes. An engine. Doors. These were things all cars had and all cars had to have if anyone was going to ever consider purchasing them.

From there you decided on the bells and whistles. Did you want power windows and power locks? Did you want a built-in CD player or would a cassette player and radio work just as well? Did you want a sunroof?

We often did without most of those add-ons. They were the extras. They were what drove the cost of a car higher and higher. They were nice to have, but a car would work without these things.

We often treat building for the web the same way: accessibility, performance and support for less-capable devices get filed under the optional extras. Then we say that we’ll get to accessibility later. We’ll make it faster later. We’ll worry about those less-capable devices later. And that’s in the best of cases. More often those “features” are not acknowledged at all. If it’s not a priority at the beginning of a project, why would we expect it to be a priority later?

Yes, there’s a cost associated with building things well (there’s also a cost of not building things well). Building something that is stable and robust always costs more than building something that is brittle and fragile.

The problem is not that there is a cost involved in building something that works well in different contexts than our own. The problem is that we’re treating that as an option instead of a given part of what it means to build for the web.

How did access get to be optional?

]]>2015-02-06T13:50:00-06:00http://timkadlec.com/2015/02/client-side-templatings-major-bugOver the past year I conducted performance audits on a handful of sites that all used client-side MVCs, typically Angular but not always. Each site had its own optimizations that needed to take place to improve performance. Yet a pattern emerged: client-side MVCs were the major bottleneck for each. They slowed down the initial rendering of the page (particularly on mobile) and limited our ability to optimize the critical path.

So I get a great deal of happiness from reading posts from much smarter folks than I who are rallying against this all-too-common mistake.

I think it is wasteful to have a framework parse the DOM and figure out which bits of default data to put where while it is also initialising itself and its data bindings.

and:

Populating an HTML page with default data is a server-side job because there is no reason to do it on the client, and every reason for not doing it on the client.

I’ve said it before: if your client-side MVC framework does not support server-side rendering, that is a bug. It cripples performance.

It also limits reach and reduces stability. When you rely on client-side templating you create a single point of failure, something so commonly accepted as a bad idea that we’ve all been taught to avoid them even in our day-to-day lives.

“Don’t put all your eggs in one basket.”

It’s pretty good advice in general, and it’s excellent advice when you’re wading through an environment as unpredictable as the web with its broad spectrum of browsers, user settings, devices and connectivity.

This might sound like I’m against these tools altogether. I’m not. I love the idea of a RESTful API serving up content that gets consumed by a JavaScript based templating system. I love the performance benefits that can be gained for subsequent page loads. It’s a smart stack of technology. But if that stack doesn’t also consist of a middle layer that generates the data—in full and on the server—for the first page load, then it’s incomplete.

This isn’t idealism. Not only have I seen this on the sites I’ve been involved with, but companies like Twitter, AirBnB, Wal-Mart and Trulia have all espoused the benefits of server-side rendering. In at least the case of the latter three, they’ve found that they don’t have to necessarily give up those JS-based templating systems that everyone loves. Instead, they’re able to take advantage of what Nicholas Zakas coined “the new web front-end” by introducing a layer of Node.js into their stack and sharing their templates between Node and the client.
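To make that concrete, here’s a rough sketch of what a shared template might look like, assuming a Node server using Express and a Handlebars template. The route, file names and data are made up for illustration, and this is just one possible wiring, not the approach any of those companies actually uses: the point is simply that the same template renders the complete page on the server for the first request, and the browser reuses it afterwards.

// server.js — render the full page on the server for the first load
var express = require('express');
var Handlebars = require('handlebars');
var fs = require('fs');

var app = express();

// One template source, compiled on the server and also shipped to the browser
var source = fs.readFileSync('./templates/article.hbs', 'utf8');
var articleTemplate = Handlebars.compile(source);

app.get('/articles/:id', function (req, res) {
  // In a real app this data would come from your API or database
  var article = { title: 'Server-rendered first', body: 'Complete HTML, no client-side wait.' };

  res.send(
    '<!DOCTYPE html><html><body>' +
    '<div id="app">' + articleTemplate(article) + '</div>' +
    // Ship the template source so the client can compile the very same template
    '<script id="article-template" type="text/x-handlebars-template">' + source + '</script>' +
    '<script src="/js/handlebars.js"></script>' +
    '<script src="/js/client.js"></script>' +
    '</body></html>'
  );
});

app.listen(3000);

// client.js — subsequent renders reuse the same template in the browser
var clientSource = document.getElementById('article-template').innerHTML;
var clientTemplate = Handlebars.compile(clientSource);

function renderArticle(article) {
  document.getElementById('app').innerHTML = clientTemplate(article);
}

However the data layer is shaped, the key is that the first response already contains the full markup, so the page is usable before any JavaScript has downloaded or run.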

This is where it gets interesting and where we can see the real benefits: when we stop with the stubborn insistence that everything has to be done entirely on the client-side and start to take advantage of the strengths of each of the layers of the web stack. Right now most of the progress in this area is coming from everyday developers who are addressing this issue for their own sites. Ember is aggressively pursuing this with FastBoot and making exciting progress. React.js emphasizes this as well. But most of the other popular tools haven’t made a ton of progress here.

I sincerely hope that this starts to change, sooner rather than later. Despite what is commonly stated, this isn’t a “web app” (whatever that is) vs “website” issue.

It’s a performance issue.

It’s a stability issue.

It’s a reach issue.

It’s a “building responsibly for the web” issue.

]]>2015-01-01T09:50:00-06:00http://timkadlec.com/2015/01/what-i-read-in-2014Time for my annual look back at what I read in the past year. Keeping in the same format as last year, each book has a rating (on a simple 5-star scale) as well as a very short review to give you (and me when I look back at this in a year or so) some idea of why I enjoyed each book.

My top three choices for fiction are: The Martian, Ancillary Justice and Genesis. For non-fiction: Chuck Amuck, Stuff Matters and The Noble Approach. For web-specific titles: Responsible Responsive Design, Designing for Performance and The Manual (I’m just going to cheat and say read all the issues).

I saw a tweet the other day from Austin Kleon where he shared that he had read 70 books this past year. He also shared a brief “How to read more” list. I only hit 39 books this year so I’m not as qualified as he is to provide advice on this, but my advice would be very similar. In particular tip #4 is important:

If you aren’t enjoying a book or learning from it, stop reading it immediately. (Flinging it across the room helps give closure.)

I mention this each year, but if I’m not enjoying a book on some level I don’t finish it. That’s why I have yet to give a book a review of less than three stars out of five. I don’t want to review books I haven’t finished and I don’t want to finish books I’m not enjoying. I currently have nearly 300 books on my “to-read” list according to Goodreads. There’s no time to waste on books that aren’t interesting to me.

Fantastic sequel in what is shaping up to be a very, very fun series to read. It’s definitely darker and grittier than its predecessor, but there’s still plenty of the same sort of snarky commentary taking place between the main characters. Thoroughly enjoyed it and eagerly awaiting book three!

Genesis is the very best sort of science fiction. It manages to explore topics such as defining consciousness, the nature of the soul and what it means to be human without ever once getting bogged down by these discussions. It grips you from the very start, and when you think you know where everything is headed it takes a sharp turn. Absolutely loved it!

I’m not a big fan of the business fable/parable thing but this was a gift from a friend so I decided to give it a read. As far as business fables go this is a decent one. But, as can be expected from this type of book, it’s very light on meat and long on lofty ideals and straw men.

The Upside of Irrationality is an interesting (and surprisingly intimate) continuation of the discussion started in Predictably Irrational. Ariely’s style of writing and storytelling moves the book along at a brisk pace. I think a few of the conclusions he came to probably would’ve benefited from a few additional experiments to verify them, but for the most part they’re well thought out. Worth a read.

Crux picked up where Nexus left off, letting us see the impact of a post-human technology being used by the masses. It’s a stronger novel than the first book. Though I enjoyed Nexus, it could get a little preachy at times—pushing the underlying ideas a little too heavily. Crux seems more mature. It still explores some really interesting concepts, but it feels better integrated into the story this time.

I also really appreciate how the ideas in the book are backed by current technological advancements. In both this and Nexus, Naam follows the last chapter up with a section describing how similar technologies are being used in real life today.

The Flight of the Silvers has a little bit of everything—time travel, parallel universes and X-Men style rules of nature. At first I wondered if it would all be a little too scattered, but Price weaves it all together to create a super fast-paced book that was incredibly difficult to put down and lots of fun to read. Looking forward to the rest of the series and hoping they follow soon: lots of questions left to answer.

This is the first book on the topic that I’ve read that I felt did a good job of presenting accessibility not as a list of bullet points to check off, but as a way of thinking about how you build your site. Whenever anyone is looking to get started in accessibility, this is where I’m going to point them.

The book hits the ground running on the first page, but as a result it took me a while to care about the characters in Locke Lamora. Once I did (probably about a quarter or so through the book), I enjoyed the story quite a bit. Good anti-hero novel.

Perry walks you through a bunch of first-hand accounts of his experiences in areas where the impact of globalization has been anything but encouraging. It’s not incredibly in-depth, and a few of the stories seem a little more loosely tied to globalization than others, but altogether it’s an interesting look at the “other” side of globalization.

Moss takes a look at the big three (salt, sugar and fat) not through a scientific lens, but a business one. Based on numerous meetings and interviews, Moss dissects the food industry’s reliance on them—from their impact on taste to how manufacturers market them in a way that can often confuse even the smartest of shoppers.

It’s a really interesting read, but unfortunately it can be a little repetitive. Some of the chapters seemed to retell parts of the same story told in other chapters, as well as reintroduce people we had already met. It doesn’t completely detract from the points the author is making, but it occurs frequently enough to make the book feel more disjointed than it should.

The Humans is about an alien who comes to earth to kill a few humans who know too much, learns to love humanity, and so on. There are certainly things to like: there are indeed a few thought-provoking sentences as well as a good amount of humorous insights (like the alien’s perception of magazines, for example). Overall though, it was just a little too heavy-handed. Some books can explore the topic of humanity in a way where it sort of reveals itself throughout the story—this isn’t one of those books. The plot is thinly constructed and exists pretty much entirely to let the author share his thoughts on the topic. It’s not a bad book—just not that great either.

In a similar vein to Replay (one of my all-time favorite books), The First Fifteen Lives of Harry August revolves around a character who lives his life over and over again. The world is ending, sooner than it used to, and it’s up to him to figure out why. While it does tend to linger on a few details longer than necessary, overall it’s a smart and well-written book that is equal parts drama, thriller and science-fiction.

Fantastic! Watching Charlie’s mental progression, and subsequent regression, was both fascinating and heartbreaking. But what really puts the book over the top for me is Keyes' focus on the emotional baggage that comes along with a sudden burst in intelligence: the bad memories, the sudden realization that folks are not as nice as they had seemed, and the struggle that comes with trying to find a way to match his newfound mental maturity with his still stunted emotional maturity. Definitely a book that keeps you thinking long after the final sentence.

Fantastic! The Martian is a realistic, thrilling and often humorous story of one man’s attempt at survival on Mars. Gripping from the very start of the book through to the very last sentence. Can’t recommend this book highly enough. Read it.

I really enjoyed Brilliance, so when the sequel came out I grabbed it right away and had pretty high hopes. Sakey did not disappoint.

When I wrote my review of the first book, I said he touched on some social topics but didn’t really explore them in much depth. A Better World starts to flesh that out a bit more by adding more dimension to the characters and more meaning to the over-arching plot. The result is a book that is a bit more thought-provoking than the first, and just as fun and fast-paced.

I was looking for a book to help me brush up on some of the things I had forgotten from college and high school, as well as give me a little better understanding of what to pay attention to when it comes to the financial health of my company. Financial Intelligence fits the bill very nicely. It’s a pretty nice refresher for those who learned some of this stuff in the past and gentle enough for people completely new to the concepts as well. Good starting point.

Well this was a surprisingly enjoyable read! I’m not a huge fan of the whole horror genre. Movies, books—there’s precious few of either that I’ve enjoyed. This one definitely breaks the mold. It feels fresh and has significantly more depth to it. The relationship between the main characters is fascinating, as is the way those relationships alter—and even seem to come full circle in some cases—by the end of the book. Thoroughly enjoyed.

Daily Rituals provides overviews of 150+ people’s days—what they did to be productive, to relax, when they worked, when they rested, etc. Each profile is short and stands alone, so it’s an easy pick-it-up/set-it-down read. Some of the profiles are more detailed and interesting than others, but what I enjoyed most was seeing the patterns emerge (for example, you can almost do a 50/50 split of people who claimed long walks and exercise were the key to their success, versus people who turned to some sort of drugs or medicine to keep themselves going).

The Stars My Destination (or Tiger, Tiger) seems to come up all the time in the discussion of great sci-fi classics. Having finally read it, it’s fairly easy to see why and to see the influence the book has had on cyberpunk and sci-fi in general. While it never quite reached the same level of quality as Bester’s The Demolished Man, that’s more a testament to how good that book is than it is a detriment to this one. The plot moves forward at a blistering pace and despite the fact that the main character is very unlikeable, you still can’t pry yourself away from finding out what happens next.

You can always trust PPK’s writing to be extremely well-researched and thorough. There is no shortage of books about the mobile web but he managed to find plenty of new and interesting tidbits regardless. I especially enjoyed the chapters on the mobile market and browsers.

So, so good. The book starts with a picture of the author and each chapter explores the “stuff” in that picture: glass, concrete, dark chocolate, etc. He discusses how the stuff gets made, what it’s good for, and its evolution over time. In some hands this could be dry stuff, but the author is incredibly passionate about materials and it’s contagious. You feel his enthusiasm throughout each chapter and can’t help but start looking at the everyday materials around you in a new light. One of my favorite reads of the year!

Designing for Performance is the book to hand to anyone—designer or developer—who wants to get started making faster sites. Lara carefully and clearly explains not just how you can create better performing sites, but how you can champion performance within your organization ensuring it remains a priority long after launch. Consider this the starting point in your web performance journey.

Chuck Amuck is more memoir than autobiography, which makes it all the more fascinating. Chuck talks about the very intense process of cartoon animation, the team that was in place at WB (along with some fairly harsh assessments of “management”) and how iconic characters like Bugs Bunny, Wile E. Coyote, and Daffy Duck evolved and developed their own personalities over time. As a bonus, the book sprinkles sketches and storyboards of the Looney Tunes animations throughout.

Fantastic primer on web typography. Loads of useful information and advice all very clearly explained. If you’ve been interested in typography but have had a hard time making sense of it all, this is the ideal place to start.

Finally got around to purchasing the first three issues of the Manual and I’m wondering what took me so long. Issue 1 was fantastic. A combination of great writing and careful editing resulted in a really enjoyable book with every section providing food for thought. I particularly enjoyed the sections from Simon Collison, Dan Rubin, and Frank Chimero. I also was really impressed by the quality of the book itself: looks great and lovely attention to detail.

A blend of mystery and science-fiction (leaning more heavily towards mystery), Scalzi’s latest is a good one. Some of the issues discussed in the book are fairly thinly veiled allusions to current situations, but they never feel forced in any way (as happens when an author pushes too hard). Instead, the story moves quickly with plenty of tension, humor and thought-provoking dialogue along the way.

Proving that issue 1 wasn’t a fluke, the second installment is just as excellent. Really tough to choose, but I’d say the sections from Karen McGrane, Cennydd Bowles and Trent Walton were probably my favorites.

A wonderful blend of biographical details and animation design principles that leaves you with a whole new appreciation for cartoon design. After reading the book, watching the cartoons becomes an even more enjoyable experience as you realize just how beautifully crafted they are. Really enjoyed this one!

I purchased Ancillary Justice almost immediately after talking to a friend earlier this year and hearing her rave about it, but it had remained untouched in my pile of books to eventually read ever since. I like sci-fi, but I’m not a huge space opera kinda guy so I hesitated. I shouldn’t have.

Ancillary Justice is a great book—well deserving of the awards it won. It’s smart, beautifully written and gripped me from early on. Ann does an incredible job of building tension throughout without a ton of superfluous battles and forced action. A tight plot and smart dialogue is all she needs to put you on the edge of your seat and keep you there until the final page.

A worthy follow-up to Ancillary Justice that only slightly falls short of matching Ancillary Justice’s excellence. Still thoroughly enjoyed it, but it was paced a bit slower and only picked things up about 2/3 of the way through. The same smart, high-quality writing is there. It just felt a little more like a setup for the final book in the trilogy (which should be very eventful).

While you’ll find more detailed information elsewhere, the author does a pretty good job of providing some context and historical insight into the evolution of the Looney Tunes, the creation of the major characters, and the personalities behind the scenes. Where this book really shines, though, is in the many beautiful and hard-to-find sketches and animation artwork prominently on display. It’s a gorgeous book!

Barrier’s book is an extremely well-researched and well-written look at American cartoon animation from the ’30s to the ’50s. While, understandably, Disney gets the most attention, he does discuss the work produced by places such as Warner Bros, Terrytoons, Hanna-Barbera and UPA. That’s really where the book flourishes. Seeing how ideas and techniques spread from studio to studio and being able to compare and contrast their different approaches is fascinating.

Barrier doesn’t pull punches. Nobody in this book is free from criticism: their miscues are highlighted just as much as their successes. In fact, he’s quite critical of all the studios and their work. While I don’t necessarily agree with a few of his critiques of some of the cartoons (his opinion on the impact of Noble & Jones’ combined work couldn’t be farther from my own), it’s interesting to hear his thoughts on them nonetheless.

After a 15-year absence, Keizer returns to teaching for one year and, thanks to this book, we get to follow along. The result is an insightful look at modern-day teaching that is both humorous at times and depressing at others.

There you have it. If you have any recommendations for what I should add to my stack of books to read in 2015, feel free to let me know!