Earlier this week, we made public the new homepage for MPR News. This is the final big piece of our ongoing responsive redesign of the site. Technology-wise, there aren't any new systems or components on the homepage that haven't already been put to use on the topic and story pages. But the homepage is a very visible and important design change.

Old and tired.

The biggest problem we were trying to solve is that our old page didn't work well on a mobile device. Today, about 40% of our total traffic comes from mobile devices. That's a lot, and to remain relevant to that growing percentage, we need to avoid being a bad experience, and ideally be a good one.

The last redesign of MPR News was done in 2008, before responsive websites were really a thing, and mobile websites were only just starting to pop up. In addition to not being mobile-friendly, there were numerous other substantial problems with our old homepage: The type was too small and without hierarchy. There were too many topical sections that all looked alike. Some testing showed that few visitors (under 25%) scrolled past the “blog box”. And there were so many different links and elements on the page that it was too much to practically take in and decipher.

To design the new homepage, we formed a small group of invested parties, the core group of which was Digital News Director Jon Gordon, Product Director Peter Rasmussen, and myself. We started by making a list of the things that we wanted to be on the new homepage. Designing a page to work well on a mobile device means you need to focus on the things that are relevant to someone with a limited screen size. We settled on the following things, which neatly explain our final design:

News stories that editors can adjust in order and prominence

NewsCut, Updraft, and the weather forecast are important and well loved by our audiences

Today’s Question needed to make an appearance when relevant, as decided by editors

We do news-related events, and those needed to show up, but not as ads

Links to the major sections of the site for more focused news

Most viewed is very popular, and we wanted that to stand out more

We do excellent photos and video, and wanted that to stay omnipresent, but not huge

The radio schedule should be present, since we are, after all, a radio service

More links to find us in other places: social media, our apps, podcasts, and email

Audio everywhere, because we create great audio

Much like our section fronts, we settled on a three-column layout. Unlike the section fronts, the persistent column moves depending on screen size: on desktops and larger screens it sits on the left; on tablets and medium screens it moves to the right. We debated this, but ultimately liked it a lot on tablets because it puts the latest news furthest to the left, which felt most appropriate. On phones, this all shifts to one column, with the news stories first.

When we display the news stories, we default to reverse-chronological order of our latest stories, but editors can and do override that to put the more important and noteworthy stories at the top of the heap. This listing of stories integrates content from our internal CMS (Itasca), our blogs, and the PMP, through our internal search normalizer, The Barn. In addition to ordering, there are five different levels of prominence a story can be given:

Level 0: Just the headline

Level 1: Headline slightly larger, thumbnail image, and a short description. This is a “described story”

Level 2: Headline larger yet, larger image, and the short description. This is a “promoted story”

Level 3: Much bigger headline, short description, no image, goes across both columns on tablet and desktop screens. We probably won’t use this very often. This is a “blowout story”

Level 4: Just like the blowout story, but with an even bigger headline. Think "Dewey Defeats Truman". This is a "super blowout".

In addition to these levels, editors can turn on or off the date stamps and add labels, e.g. “BREAKING NEWS”, above headlines.
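To make those options concrete, here is a rough sketch of what a single homepage story entry might look like once ordering, prominence, and label metadata are layered on. The field names are invented for illustration; they are not the actual Itasca or Barn schema.

```json
{
  "headline": "Walleye opener arrives with ice still on some northern lakes",
  "url": "http://www.mprnews.org/story/example-story",
  "origin": "itasca",
  "position": 3,
  "prominence": 1,
  "label": "BREAKING NEWS",
  "show_datestamp": true,
  "description": "A short teaser, shown for described and promoted stories.",
  "thumbnail": "http://img.example.org/walleye-thumb.jpg"
}
```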

We’ve also fully switched to using Franklin Gothic Demi Condensed as our headline typeface, and use Franklin Gothic Medium in some places as well. As any newspaper designer knows, using a condensed font allows more characters to fit into a line, a consideration that is particularly important on smaller phone-sized screens. The MPR News logotype is Akzidenz Grotesk, but Akzidenz is not easy to license as a webfont. Franklin is easier to license and is a close relative of Akzidenz, so it suits our needs. This change to Franklin Gothic now propagates to all the pages on the site, including the stories, topics, and section fronts.

One element I particularly like is the new schedule. We are a radio station, and the schedule serves the very utilitarian and necessary function of informing the audience when shows are going to be on. It was surprisingly difficult to find on our old site. With the new homepage, the schedule will move to the top of the page on weekends, when the news slows down somewhat and the programs are different from the weekday lineup. It is a carousel, which is somewhat taboo for mobile, but slick works fairly well for our limited, text-based implementation.
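For the curious, a minimal sketch of how a text-only carousel like the schedule might be initialized with the slick jQuery plugin follows; the selector, slide counts, and breakpoint are hypothetical rather than our exact settings.

```javascript
// Illustrative slick setup for a text-based schedule carousel (selector and
// numbers are hypothetical). Assumes jQuery and the slick plugin are loaded.
$('.js-schedule-carousel').slick({
  slidesToShow: 3,       // a few upcoming programs at larger widths
  slidesToScroll: 1,
  infinite: false,
  dots: false,
  responsive: [
    // below 540px, drop to one program per slide
    { breakpoint: 540, settings: { slidesToShow: 1 } }
  ]
});
```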

We still have some work left to do on the homepage and mprnews.org: Our show pages aren’t fully migrated to the new layout; Our media player & playlist system needs to be re-worked to use websockets; There is an election coming up… The list goes on and a website is never truly finished (well, maybe). But, we are in a better place for more of our visitors than we were a year ago when we started this project.

We know not everyone will agree with all the choices we've made, and we know we're not perfect. Please feel free to share your thoughts on our design here in the comments, or use the feedback forum we've set up.

The Barn

Origins

The Barn is the central internal search engine and content aggregator within MPR|APM. Here's how it came to be.

A few years ago I went through a period of reading and re-reading Charlotte's Web to my kids. I loved the metaphor of the barn as a place for everything and everything in its place.

Around the same time I had grown dissatisfied with the state of search within the company. There was no single place where I could go and find everything that the company had ever produced. Google knew more about our content than we did. That seemed wrong to me.

I also knew that we would soon be faced with a big project called the Public Media Platform, which would involve standardizing the metadata and structure of our content for publishing to the central PMP repository. That meant I needed to learn about all the different CMS systems at work within the company, a non-trivial task since we have at least these:

My own module, essential for massaging things like character encodings, XML/HTML/JSON transformations, and the like.

Throughout the day, cron jobs keep the aggregated content in sync with the various origins and marshal it all into XML documents on local disk. An indexer sweeps through, finds any new XML documents, and incrementally updates the Dezi indexes.

I create a single index for each origin+content-type, so there’s a separate index for Itasca-Features and Itasca-Audio and Itasca-Images. Maintaining separate indexes makes it much easier to create distinct endpoints within Dezi for limiting search to particular content subsets, whether by origin or type. It also helps with scaling, by sharding the search across multiple filesystems and machines.
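As a rough illustration of why the per-origin indexes are convenient, a client can point a query at just one of them and never see results from the others. The host name, index path, and response fields below are hypothetical; only the general shape of a Dezi-style JSON search response is assumed.

```javascript
// Illustrative query against a single per-origin index (hypothetical host,
// path, and fields). Assumes jQuery for the JSON request.
var BARN_HOST = 'http://barn.example.org';

// Search only Itasca features; the audio and image indexes are never touched.
$.getJSON(BARN_HOST + '/itasca-features/search', { q: 'state fair' })
  .done(function (response) {
    response.results.forEach(function (doc) {
      console.log(doc.title, doc.uri);
    });
  });
```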

Creative re-purposing

Once the Barn system was up and humming along its automated path, we started to see other uses for it besides just search. Since all our content is now normalized into a standard format and metadata schema, we can create ad hoc collections of content across multiple origins. Since the Barn knows how to talk to the origin backend databases, in real-time, we can de-normalize and cache assets (like MPR News stories) that do not change very often but which can be expensive (in terms of SQL queries) to generate. And since we now have all our content in one place, we can re-distribute it wherever we want, automatically.

Here, for example, is a command for exporting Barn content to the PMP:

% perl bin/barn-export -e PMP 'origin:marketplace date:today'

That pushes all of today’s Marketplace content to the PMP. It runs on a cron job via the Barn’s scheduler system. An export is just a search query, so we could also do something like:

% perl bin/barn-export -e PMP 'sausage politics origin:feature'

That pushes every story mentioning the keywords ‘sausage’ and ‘politics’ to the PMP. Pretty handy.

Future

The Barn has proven very helpful to our internal infrastructure and content delivery. That improves our audience experience in some direct ways (faster page load times, automating some tasks our content creators used to do manually). We'd also like to open up a subset of the Barn's search functionality to our audiences, so that they can search across all our content at once, and preview audio and images inline within results, just like our reporters and editors can do today.

Audio Search
Every day APM|MPR generates several hours of audio content for its radio and digital broadcasts. Over time that adds up to many terabytes of audio, most of which has no written transcripts available, because transcripts are expensive and slow to create. Imagine listening to the news and writing down everything you hear, word for word, then going back to error-check and verify the spelling of names and places. Now imagine doing that for hours every day. It's tedious work.

Yet unless a transcript exists, there is really no way to search the audio later. Basic metadata, like key words, speaker names, title, date and time, might be available, but it won’t begin to represent the detail of a conversation in a radio interview.

In the last decade, speech-to-text technology has evolved to the point where we can start to imagine computers transcribing our news broadcasts and radio shows well enough to make them searchable. APM|MPR wanted to find out if the open source speech-to-text (also known as automatic speech recognition (ASR)) toolkits were mature enough to use for searching our audio archives.

We had modest goals. We knew that even commercial products, like Google Voice or Siri or even your company’s voicemail system, could vary widely in the quality and accuracy of their transcriptions. Previous, failed attempts to automate transcriptions at APM|MPR had a much loftier goal: we wanted to publish the results for our audiences. We decided for this prototype to scale back our ambitions and instead focus on keyword detection. Most search engines focus on nouns as being the most useful words. So we wanted to answer one question: could we develop an ASR system that could identify the most frequently used nouns or noun phrases in a piece of audio? The Knight Foundation agreed to help us fund a project to answer that question.

We partnered with some industry experts at Cantab Research, Ltd. who agreed to build the basic ASR tools for us, based on their extensive work with the open source software we wanted to evaluate. Cantab is led by Dr. Tony Robinson, a leader in the ASR field. Cantab would build the various acoustic and language training models required, as well as write the scripts for manipulating the ASR libraries. APM|MPR would build the testing scripts and processing infrastructure, including a web application for viewing and comparing transcripts.

Based on consultation with Cantab, we chose to focus on a comparison between two open source ASR libraries: Julius and Kaldi. We identified about a hundred hours of audio for which we had manually generated transcripts, and supplied the audio and text files to Cantab. Unfortunately many of the transcripts were not accurate enough, because they had been "cleaned up" for audience presentation, but Cantab was able to identify additional public domain audio and transcripts to flesh out the training collection and push on with the work.

Over the course of three months Cantab delivered five different iterations of the models and code. Each version got progressively faster and more accurate. Three of the iterations used Julius and two of them used Kaldi. In that way we were able to compare the two ASR libraries against one another using the same collection of testing material. In the end we were able to get comparable results with both libraries.

While Cantab was training the ASR models, we built a web application where users could register audio by URL and trigger a transcription for later delivery via email. The application was designed to process the queue of incoming audio using a variable number of machines so that it could scale linearly. The more machines we point at the queue, the faster the application can process the audio.

Each time Cantab delivered a new version of the ASR components, we re-ran our evaluation against our testing collection, using the web application we had developed. The testing collection was composed of the same 100 hours of audio and transcripts we had sent to Cantab originally. Our testing procedure looked like:

Generate a machine transcript automatically

Apply a part-of-speech tagger and extract the nouns and noun phrases, sorted by frequency, for both machine and human transcripts

Compare the machine and human word lists

What we found was that the testing scripts consistently found 85-100% of the same key words in both the machine and human transcripts, as long as frequency was ignored. If frequency was weighted, the overlap dropped to 50-70%. What that told us was that the machine transcripts were accurate enough, most of the time, to surface the key words, even if they couldn't be relied upon to identify those words every time they appeared. That feels "good enough" to us to pursue this route, since frequency in full-text search is typically used only to affect rankings, not inclusion, within a result set.
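To make that measurement concrete, here is a small sketch of how the overlap can be computed with and without frequency weighting. The word counts are made up, and this is not our actual testing script.

```javascript
// Compare machine vs. human keyword lists. Each list maps a noun or noun
// phrase to how many times it appeared in the transcript (hypothetical data).
function overlap(machine, human, weighted) {
  var matched = 0;
  var total = 0;
  Object.keys(human).forEach(function (word) {
    var humanCount = weighted ? human[word] : 1;
    var machineCount = weighted ? (machine[word] || 0) : (machine[word] ? 1 : 0);
    total += humanCount;
    matched += Math.min(humanCount, machineCount);
  });
  return matched / total;   // share of the human key words the machine surfaced
}

var human   = { ebola: 5, minneapolis: 3, budget: 2 };
var machine = { ebola: 2, minneapolis: 3, vikings: 1 };

overlap(machine, human, false);  // ~0.67: two of the three key words were found at all
overlap(machine, human, true);   // 0.5: the overlap drops once frequency is weighted
```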

Processing audio for ASR, even to test a single configuration setting, can be very time-consuming and resource-intensive, and we had an aggressive schedule (six months) and budget for this project. Still, our experience prototyping this project taught us several things, among them:

Garbage in, garbage out. The accuracy of the ASR application is completely dependent on both the quality and quantity of training material we can provide. We would like to identify a much larger corpus of APM audio to use for improving our training models.

Who said that? Identifying specific speakers (called “diarization”), such as the reporter versus the interviewee, could help us improve our search results, by allowing audiences to limit their searches to specific speakers.

Cross your Ts. For search purposes we ignored capitalization, punctuation and sentence structure. If we spent some time maturing our language models and related scripts, we might be able to improve key word identification, particularly around proper nouns like people's names.

A little song, a little dance. Identifying sounds that are not human beings speaking, such as music or other sound effects, could use a lot more work.

We really enjoyed working on this project. APM|MPR would like to thank Cantab Research, particularly Dr Robinson and Niranjani Prasad, who helped elucidate the mysteries of ASR systems, and the Knight Foundation and Knight Prototype Fund, whose financial support and encouragement made the project possible.

New collection pages for MPR News

If you're a die-hard fan of MPR News, you may have noticed some new page layouts on our site recently. We have been working hard on our search and grouping tools that allow us to generate these pages. We call these groups collections: pages that list and link to other pieces of content, almost always news stories and/or audio segments.

Collections aren't generally pages that get much traffic or attention from audiences or search engines. But they can occasionally serve a few useful purposes. First, for a small (and I mean small) subset of our visitors, they are highly utilitarian pages that allow browsing and refining by topic, where search doesn't work well. Secondly, collection pages are useful for grouping highly focused stories together at times when there is a lot of coverage happening in a relatively short period of time. For example, coverage of the Franken/Coleman recount or the 35W bridge collapse.

The other important consideration for us is that our homepage is essentially a collection page, but the collection it “searches” is every story we create. What we’re building for our low traffic collection pages will be hugely important for building our new homepage.

Old collection pages: Bad

On our old site, we have had very little standardization of our visual design, and this was most evident on our collection, project, and episode pages. Each collection and project page was a special little flower, crafted and cared for during its special little moment, then left to wither and die. Here are a few examples:

Collection pages on our old site were not conceived in a time when smartphones, tablets, and retina screens were a thing. But today they are, and our pages obviously need to deal with that. Most of the old pages feature too many images or images that are too small, type that can only be read with a magnifying glass, and little standardization between collections. Not only is this a maintenance nightmare for web developers and unresponsive to device capabilities, it is confusing and scattershot for our audiences.

New collection pages: Less bad

Our new collections have a range of visual and utility options that can be turned on and off by editors, depending on the needs of the story. Some collections can just be a basic list of stories, while others need an introduction paragraph or two, a sidebar with links to evergreen interactive tools, and/or a listing of people who worked on a project. During the design process, I called them basic, fancy, fanciest, and visual. Here are a few of the various flavors now in use on our live site:

We severely constrained the design options that can be changed on these pages by non-developers. The only visual flourish that can be manipulated by an editor is the title for a collection, such as on The Daily Circuit collection page. We’re still working on the actual implementation for the fanciest and major topic/section pages, but the visual design is solidified. You can preview our high fidelity mockups in InVision.

A small amount of nerdy details

The configuration for these pages is read from a JSON config file that lets editors change text and turn on/off options. Right now, these JSON files are hard-coded, but we’re working on a general-purpose configuration management tool that can be used by all our websites for just this type of situation.
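As an illustration, the config for a single collection might look something like the sketch below; the field names are invented for the example rather than copied from our files.

```json
{
  "title": "The Daily Circuit",
  "flavor": "fancy",
  "show_intro": true,
  "intro": "A paragraph or two introducing the collection.",
  "sidebar": {
    "enabled": true,
    "links": [
      { "text": "Interactive: MN budget explorer", "url": "http://www.mprnews.org/example" }
    ]
  },
  "credits": ["Reporter One", "Producer Two"]
}
```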

For the listings of stories on these pages, things are a little more complex. We have an internal tool we call "The Barn", built for the PMP and based on Dezi, that indexes and normalizes content from our various CMSes in real time (every animal/CMS is welcome in the barn, even the stinky & bad-tasting ones). We have another tool, called Meeker, that allows us to save queries, and some metadata about those queries, to The Barn. Both The Barn and Meeker spit out JSON, which makes them effectively language/platform agnostic for our web apps. The mprnews.org web app is essentially the VC in MVC, relying on RESTy JSON models on other servers for the data.

My geekier colleagues will hopefully be sharing more details about how Meeker, The Barn, and our other REST API systems work and allow us to evolve our CMSes.

New weather pages for MPR News
Today we have launched our new weather pages for MPR News, sporting a new design, improved weather data, and geolocation. If your browser supports it, we will attempt to give you the most accurate forecast for the location where you are.

Our old weather pages were very text heavy. We’ve re-vamped that with more relevant visualizations of the upcoming weather. For the next 48 hours, we show the sky conditions, and a quick text description of each day’s forecast. We also show a handy line graph of the temperature swing, highlighting the highs and lows, and when they’ll happen.

Our icons are also new, and support retina devices. Looking at other weather pages, flat icons like Meteocons appear to be all the rage. I don't think those icons communicate the range of weather conditions, or the differences between night and day, terribly well, especially at smaller sizes. Our new icons are an evolution of icons I previously created and these icons by Tobias Wiedenmann. For something as vibrant as weather, color, depth, and texture were tools we didn't want to abandon.

7 day forecast shows a heat wave.

For the longer-term 7-day forecast, we show the temperature range for the day and, if your device's screen can fit it, when the high/low temps for that day are going to happen. We also show the average high/low temps for the day, if available for your location (more on this below).

We also include a link and blurb on the latest forecast from Updraft. Despite the great data, it’s important to have a skilled meteorologist interpret the data and help us peek around the corner for signs of hope and/or gloom.

Oh noes! A blizzard in Grand Forks!

When severe weather is happening, we also display prominent alerts from the NWS. If we’re running a live blog, we’ll also have prominent links to that to get up to the minute storm coverage.

With this new page, we have replaced Weather Underground with Forecast.io. Our meteorologists generally prefer the National Weather Service data for forecasts, but wunderground hasn’t tracked NWS data quite as closely as we’d like over the years. Forecast.io tracks the NWS LAMP data very closely in the US, which makes our meteorologists happy. Forecast.io also has an excellent API which makes developers happy, and reasonable pay-as-you-go pricing, which makes the bean counters happy.

Forecast.io data source tracking.

There are two things with forecast.io that we have to work around or augment. First, the API response doesn't include the trend for atmospheric pressure: rising, falling, or steady. To work around this, we make a second API call asking for the conditions 3 hours ago, then compare the barometer readings. Any change lets us know the pressure trend. Weather nerds know that atmospheric pressure is important for understanding coming weather patterns.
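A rough sketch of that workaround follows; the API key, coordinates, helper names, and the threshold for calling the trend "steady" are placeholders, not our production values.

```javascript
// Derive the pressure trend by comparing current conditions with a forecast.io
// "time machine" request for three hours ago (key and threshold are placeholders).
var API_KEY = 'your-forecast-io-key';

function forecastUrl(lat, lon, time) {
  var point = lat + ',' + lon + (time ? ',' + time : '');
  return 'https://api.forecast.io/forecast/' + API_KEY + '/' + point;
}

function pressureTrend(lat, lon) {
  var threeHoursAgo = Math.floor(Date.now() / 1000) - 3 * 60 * 60;

  return Promise.all([
    fetch(forecastUrl(lat, lon)).then(function (r) { return r.json(); }),
    fetch(forecastUrl(lat, lon, threeHoursAgo)).then(function (r) { return r.json(); })
  ]).then(function (results) {
    // Compare current pressure with the reading from three hours ago (millibars).
    var delta = results[0].currently.pressure - results[1].currently.pressure;
    if (delta > 0.5) { return 'rising'; }
    if (delta < -0.5) { return 'falling'; }
    return 'steady';
  });
}
```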

Secondly, forecast.io doesn't provide the average high and low temps for a given location. We have retrieved the 30-year 'normals' from the Climatic Data Center and built a little system to retrieve them from a handful of CSV files. NOAA makes this data available for the entire country as a series of 30 MB CSV files or via a very slow REST API, but we opted to just grab the handful of MN observation stations. We've hard-coded the coordinates of these stations (they don't move) and then do some quick calculations to see if your weather location is near enough to one of our known locations. If it is, we show you the normals and how your forecast compares.
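Here is a rough sketch of that nearest-station check; the station list, coordinates, and distance cutoff are illustrative rather than the real ones.

```javascript
// Hard-coded observation stations (hypothetical subset with approximate coordinates).
var stations = [
  { id: 'MSP', lat: 44.883, lon: -93.229 },
  { id: 'DLH', lat: 46.837, lon: -92.210 },
  { id: 'RST', lat: 43.904, lon: -92.492 }
];

// Great-circle distance in kilometers between two lat/lon points.
function distanceKm(lat1, lon1, lat2, lon2) {
  function toRad(d) { return d * Math.PI / 180; }
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Return the nearest station within maxKm, or null if none is close enough
// for its normals to be meaningful for this visitor.
function nearestStation(lat, lon, maxKm) {
  var best = null;
  stations.forEach(function (station) {
    var d = distanceKm(lat, lon, station.lat, station.lon);
    if (d <= maxKm && (best === null || d < best.distance)) {
      best = { id: station.id, distance: d };
    }
  });
  return best;
}
```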

I have personally been using these weather pages for the past month and find them both useful and complicit in my discontent with our polar vortex fueled misery. Depending on the forecast (and how you’re feeling), the average highs and lows are inspiring or damning. The trend lines really help you know when might be the best time to take the dog for a walk.

Any feedback or issues are always welcome.

Makefiles for Web Projects
It seems everyone and their mom has a slick new build system designed to streamline the web development process.

These tools are great, but what about those small projects that don’t necessitate extra dependencies?

Dynamically loading assets with pjax and require.js
With the new story pages for MPR News we're using a combination of techniques to make the site load very fast.

43 http requests, 780KB, 2.6s load, notbad.gif

One of those techniques to make our site fast has been to combine the CSS and javascript for the site into a single file. This keeps our HTTP request count low, which keeps pages fast. This works fine when there are only one or a few types of pages, but what happens when we have more than story pages, and have pages that require wildly different CSS and javascript? E.g., if a visitor comes to the home page but never sees a story page, they don't need the assets for a story page loaded. Our initial build system for the site would bundle all of that up, meaning our hypothetical visitor would get CSS and javascript they didn't need.

We’ve solved this by changing our build system to build different files for each major section of the site, plus base files that are shared between all sections. So, visiting a story page, you’ll receive this:

base.min.css (7.6k gzipped), <link> in html

story.min.css (2.8k gzipped), <link> in html

init.js (71k gzipped) <script src=…> in html

story.js (1.6k gzipped) loaded dynamically via require.js

This is twice as many files and HTTP requests as were previously necessary, which isn't great, but the tradeoff is worthwhile considering how slow loading the assets for every section of the site could be. We haven't built all sections yet, so we don't know precisely, but we can imagine that it'd suck. Even if the file size stayed reasonable, the proliferation of event bindings and unused CSS selectors would weigh heavily on page performance.

Enter PJAX

Our setup is complicated somewhat by our use of PJAX (pushState + ajax). Since we don’t do a full page reload, something needs to load up the new CSS and javascript for new sections as a visitor navigates through our site. Enter router.js:

Router.js executes at initial pageload to load any javascript dependencies for the present page path. On subsequent PJAX events, it looks at the new route and tries to load the CSS and JS dependencies for that route. We use pjax:start for CSS loading so that we can get the CSS before the new markup is injected and avoid any FOBUC. We wait until PJAX is done (pjax:complete) to load the javascript, because the new javascript probably has selectors that need to run against markup that needs to be present, and we care less about the JS firing instantaneously. I wouldn't claim that router.js is robust or sophisticated, but it works for us so far.
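To give a feel for the approach, here is an illustrative sketch of what a router along these lines can look like. The route patterns, module names, and CSS paths are hypothetical; this is not our actual router.js.

```javascript
// Map URL patterns to the require.js modules (JS) and require-css modules (CSS)
// that the matching section needs. All names here are hypothetical.
var routes = {
  '^/story/':  { css: 'css!css/story',   js: 'story' },
  '^/weather': { css: 'css!css/weather', js: 'weather' }
};

function assetsFor(url) {
  var path = url.replace(/^https?:\/\/[^\/]+/, '');   // strip the host, keep the path
  for (var pattern in routes) {
    if (new RegExp(pattern).test(path)) { return routes[pattern]; }
  }
  return null;
}

// Initial pageload: the CSS is already in the HTML, so only the JS needs loading.
var initial = assetsFor(window.location.pathname);
if (initial) { require([initial.js]); }

// pjax:start fires before the new markup is injected, so fetch the CSS early.
$(document).on('pjax:start', function (event, xhr, options) {
  var assets = assetsFor(options.url);
  if (assets) { require([assets.css]); }
});

// pjax:complete fires once the new markup is in place, so the JS can bind to it.
$(document).on('pjax:complete', function (event, xhr, textStatus, options) {
  var assets = assetsFor(options.url);
  if (assets) { require([assets.js]); }
});
```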

We’re using the very helpful require-css plugin for require.js to dynamically load CSS assets after initial pageload. On initial pageload, we define the CSS for both the base and the given route in the HTML. Again, this avoids FOBUC and waiting on javascript.

Building with grunt

We use grunt as our front-end build thingamajig. The most relevant part of our Gruntfile.js is the grunt-contrib-requirejs configuration.

Grunt-contrib-requirejs maps very directly onto the require.js optimizer (r.js), which is indispensable. R.js analyzes our require.js setup, traces the dependencies, and builds discrete minified .js modules that map to our routes in router.js. Every time we add a new route that has new js dependencies, we need to update our Gruntfile.js and router.js.
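For illustration, a grunt-contrib-requirejs configuration along these lines might look like the sketch below; the paths and module names are hypothetical rather than copied from our Gruntfile.

```javascript
// Illustrative Gruntfile.js excerpt (paths and module names are hypothetical).
module.exports = function (grunt) {
  grunt.initConfig({
    requirejs: {
      compile: {
        options: {
          baseUrl: 'js',
          mainConfigFile: 'js/init.js',
          dir: 'js-built',             // r.js writes the optimized modules here
          optimize: 'uglify2',
          modules: [
            { name: 'init' },                         // shared base bundle
            { name: 'story',   exclude: ['init'] },   // per-section bundles that map
            { name: 'weather', exclude: ['init'] }    // to the routes in router.js
          ]
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-requirejs');
  grunt.registerTask('build', ['requirejs']);
};
```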

A word of warning: the paths, dirs, and baseDirs setup in require.js / r.js can be tricky and took me some trial and error to get right for the build setup that we wanted.

Assets & expire headers

We've also improved our asset versioning. In development mode, our site loads javascript out of the /js/ path. In production, after we've run grunt deploy, javascript is served out of /js-built/{site version}/, which is where grunt-contrib-requirejs builds it. Our other CSS, image, and font assets are served out of /a/{site version}/ in production mode. Previously, we had been versioning individual files, but we are now versioning our asset folders directly. This means we can set very long expires headers on all our assets, yet when we make a change to the site, the assets will update reliably.

There's a small mod_rewrite chunk in our Apache config that handles these versioned asset paths.

Since we use a CDN, Akamai, we want our CDN to update frequently from our origin servers, but to advertise far-future expires to all clients. The way you do this with Akamai is to set the expires you want for your CDN as your normal expires header, and then serve an Akamai-specific header that tells their network what to rewrite your expires to for downstream clients.

Degrading support for IE8

This afternoon we pushed a change to mprnews.org that degrades support for Internet Explorer 8. This is how it feels to no longer have to support IE8 as a first-class browser.

IE8 does not support CSS3 media queries, which are pretty much a requirement for a mobile-first, responsive website. We had previously supported IE8 as a first-class browser, but continuing to support it as we build out more features of MPR News was becoming increasingly difficult.

Our site is still fully functional and all content is still present in IE8. We no longer generate backwards-compatible, media-query-free CSS for IE8, so IE8 only sees the CSS not wrapped in a media query. This means IE8 loads what is essentially the mobile breakpoint of our site.

New story pages for MPRNews.org
For the past several months, a team of us in St. Paul have been hard at work crafting a new and more modern reading and listening experience for the MPR News website. The story pages that we're unveiling today are the next step in our process to modernize our site and keep it up to date with our audiences' media consumption habits. These new story pages will work on all devices, load faster, be easier to read, and present our journalism better.

Right now, we’re only making our story pages available. Our existing homepage and ancillary pages will retain their same look and feel for now. To get to a new story page, visit an old story page and follow the link to the beta version on the right. These new pages will live alongside our existing pages for the next few weeks (in Beta!) as we continue to fix bugs and listen to the feedback from our audiences. You can help us by sharing your gripes, likes and ideas in our feedback forum.

Since this is a developer blog, I shall now delve into the details of what we’re doing.

New domain

Just like we moved The Current from minnesota.publicradio.org to thecurrent.org, so too are we moving MPR News to mprnews.org as a canonical domain name instead of just an alias. One reason is that we want the URL to match our brand. Moving to a separate domain also allows us to have more sophisticated URL routing and CDN settings that are harder to manage when shared on the same domain as other sites.

Responsive design + mobile first

Like many sites, including our own Marketplace, The Current, and MPR News Blogs, our pages are going responsive. They'll work on all your devices (or at least devices made within the last decade). For The Current and the MPR News blogs, we had two breakpoints because we used Foundation. We found that, at the smaller sizes of our desktop/large breakpoint, pages felt cramped. For our new story pages, we have three major breakpoints: Small/phone (0-540px), Medium (541px-748px), and Large (749px+).

Our CSS has been structured with Mobile First in mind. Mobile browsers only have to deal with the simplest and smallest parts of the CSS, while tablets (medium) and desktop (large) have more rules to parse, but in theory have more CPU horsepower to work with.

We support IE8, but IE8 does not read media queries. There are several ways to work around this. We use LESS, so we can do some pre-processing on our CSS and generate an IE8 compatibility stylesheet. IE8 stops parsing as soon as it sees a media query, so if we loaded our CSS without doing anything, IE8 would render the mobile page. This isn't terrible, but it could be better. Instead, we generate an ie8_compat.css file for IE8 that has our medium, large, and other rules, all without media queries. IE8 only loads this file, and it makes IE8 render the desktop site. Here's a sample gist of how we've structured the base.less file.

Performance + pjax

Part of being a good responsive site is that performance, how long a page takes to load up, needs to get better, not worse. Going between pages needs to be faster, not slower. Many redesigned responsive sites actually perform worse than their desktop-only counterparts. We're attempting to avoid this pitfall with laser-like focus. We're doing a lot of the standard things like reducing HTTP requests, using icon fonts, setting long expires, etc., but we're going the extra step with two other techniques.

Pjax is short for pushState plus ajax (asynchronous javascript and xml). To put it more plainly: when someone clicks a link, we reload only the parts of the page that need to change, not the whole page. The main benefit is that we don't have to re-download assets or repaint the entire screen. Using pjax, pages really snap into view almost immediately, rather than after a couple-second delay. Pjax, unlike plain ajax or other techniques, also means that the URL changes as portions of the page change.

A nice non-performance benefit of pjax is that our new audio player can stay persistent and playing in the same window, even as a visitor navigates to other pages or stories.
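For readers who haven't used pjax, the basic wiring can be as small as the sketch below. This assumes the jquery-pjax plugin; the selectors and options shown are hypothetical, not our exact setup.

```javascript
// Route internal link clicks through pjax so only the main content container
// is replaced and the URL is updated via pushState (selectors are hypothetical).
if ($.support.pjax) {
  $(document).pjax('a[data-pjax]', '#main-content', { timeout: 5000 });
}
// Browsers without pushState simply fall back to normal full-page loads.
```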

We've tried to minimize the amount of javascript on our pages, because in a responsive world doing fancy things with javascript gets complicated to manage. Javascript is also not free: it incurs loading, parsing and execution time that is particularly pronounced on slower mobile devices. One technique to mitigate the performance issues is to only load the bits of script you need right before you need to use them. We're using require.js for this. Here's an example on our site: many of our stories have audio and an audio player, but most people read the story and don't listen to the audio. We don't want to load the audio player scripts for everyone; we only want to load them, on demand, for people who request audio playback. Require.js helps us do that, as sketched below. Require also integrates nicely with our build system, grunt, to generate tidy and linted code. It's swell.
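Here is a rough sketch of that on-demand pattern; the module name, selector, and player API are hypothetical.

```javascript
// Load the audio player module only when someone actually asks for playback.
// Module name, selector, and player API are hypothetical.
$('.js-play-audio').on('click', function (event) {
  event.preventDefault();
  var audioUrl = $(this).attr('href');

  // require.js fetches audio-player.js (and its dependencies) on the first
  // click; subsequent clicks reuse the already-loaded module.
  require(['audio-player'], function (player) {
    player.play(audioUrl);
  });
});
```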

Responsive Images

There is no official, ratified specification for responsive images, which makes it really hard to do well. On our blogs, we used a <picture> polyfill, and it works fairly well. However, it depends on javascript to actually run, and a not-insignificant amount of javascript at that. For our new story pages, we're testing a technique some have dubbed clowncar. It takes the ability to have media queries in SVG images and combines it with raster images. For IE8 and older Android users, there is a fallback of a standard <img> tag hidden with conditional comments.

We have a few issues with aspect ratios not scaling properly, but overall we find it to work well. To our knowledge, this technique hasn't been used on a larger website yet, so we have some hesitation about being the guinea pig, but we think the benefits are worth it.

Peers Conference 2013 in Chicago
When something is executed in a way that feels simple and effortless, you know a lot of thought and effort went into it.

That's how I felt about Peers Conf, a perfectly sized event with just the right number of attendees. I was able to meet almost everyone there, and even chat one-on-one with the presenters… a rarity at most conferences I've attended.

The event's subject matter was PHP frameworks, with a mix of business discussion and philosophy. It was coordinated by Jessica D'Amico.

Laravel

Laravel was on a lot of people’s minds, as the framework’s creator Taylor Otwell was on site to demonstrate new features in its latest release.

Taylor Otwell talks Laravel 4

I have been a long-time productive user of CodeIgniter myself, but can't hold back my excitement for Laravel. It strikes a nice balance, with excellent documentation and a strong feature set, while avoiding the feeling of "kitchen sink" abundance. It brings inspiration from Rails and other frameworks, including migrations, an ORM (Eloquent), authentication, sessions, routing, message queuing, a template engine (Blade), and many other tools.

Taylor highlighted the fact that documentation and great community support are what make frameworks popular. Laravel will be no exception in that regard.

Craft

Peers included several presentations about Pixel & Tonic’s new commercial CMS, which just reached 1.0 status (base version is free).

Take a look under the hood, and you’ll find an extremely well-executed set of tools for building a site with dynamic content. On the front-end, templates are rendered with the Twig template engine. Add-ons are sensibly structured and the process for plugin development is well-documented.

High Traffic Expression Engine Sites

Anna Brown shared lessons learned while working on a high-traffic news site based on Expression Engine.

Anna Brown talks about high traffic Expression Engine sites

Being a WordPress developer, I found many takeaways from Anna’s talk that could translate to any CMS, not simply Expression Engine:

Many things about your site will become increasingly difficult as database size increases.

Caching cannot simply be bolted onto your CMS, and no single caching solution will solve all your problems. An HTTP proxy or accelerator like Varnish can be helpful when configured with a lot of memory.

Make performance changes one at a time and measure their impact with tools like those provided by New Relic.

Communicate clearly with your devops folks.

It’s hard being a team of one. Form a team of people with a variety of skill sets who can help.

EE+Git+Capistrano

Having automated many of my own development tasks with Make, I can see a ton of smart thinking behind Viget’s workflow and deployment process with Capistrano. By now, I think everyone realizes Capistrano isn’t just for Rails apps anymore.

My best takeaway for versioning databases was their sync-down workflow. Databases are always a pain point when it comes to version control, and the sync-down process makes a great deal of sense when migrations are not an option.

Framework Agnostisity with Composer

Phil Sturgeon, the developer of PyroCMS, has touched many of the frameworks and libraries the PHP communities have come to know and love.

Phil Sturgeon talks Framework Agnostisity with Composer

Talking points:

PEAR is old.

CodeIgniter took the helm against everything that PEAR stands for

There are a myriad of frameworks available now, and many standards have emerged to solve the problem of package management

Composer has evolved to provide dependency management for almost any type of project

PyroCMS is deprecating CodeIgniter in favor of Laravel in version 3.0

The complexities of deprecating a framework could easily merit several more talks

I enjoy the way he pronounces “HTTP”

Everyone I know has used one of Phil’s libraries by now, so throw him a few bucks as he raises money for the Braking AIDS ride this fall.

Closing Talks

The conference wrapped up by waxing philosophical…

Overlooking Millennium Park from Chicago Cultural Center.

Finding Balance

Angie Herrera's talk spoke to me. After many years of totally overdoing it myself, Angie reminded me that the balance of work and happiness takes time and reflection, and that the importance of surrounding yourself with family and loved ones cannot be overstated.

Take time to support your own health, get sleep, and get outdoors.

The Dao [sic] Of Low

“What is work? Why do we work? How do we keep working? In this contemplative talk, Low will look at these questions (and more), while trying to answer them with the help of ancient and modern philosopher’s theories, and a healthy dose of critical thinking.”

Roundtable Discussions & Peer Review

The one-on-one peer review discussions were a great part of this conference. I had the opportunity to talk business and work with Allan Branch of Less Everything.

And many more…

I spent my time on the development track and missed a few of the business sessions. Hopefully somebody else does a wrap-up on those.