Drupal is an enormously welcoming community with countless online forums and community events to learn about the platform. Its open-source knowledge sharing and peer review is arguably second-to-none, and thanks to Acquia's Drupal certifications the Drupal learning process is becoming more consolidated.

However, nothing can quite beat the quality, focus, and hard work that goes into publishing a book. We’ve quizzed our Drupal developers and members of the Manchester tech community to find out which books every Drupal developer must read.

Drupal-Specific Books

This book is a great welcome to Drupal 8, introducing you to the architecture of D8 and its subsystems for a thorough foundational understanding. It guides you through creating your first module and continues to build on your skills with more of the functionality that all modern developers need to know.

Drupal 8 Development Cookbook - Second Edition: Harness the power of Drupal 8 with this recipe-based practical guide

By Matt Glaman

Using a fun, easy-to-follow recipe format, this book gives you an expansive look at the basic concepts of Drupal 8, in a step-by-step fashion. While this sometimes misses out on the ‘why’ of an action, it makes building new modules approachable and unintimidating for any level of developer.

Ambitious, intelligent, and a great Drupal developer? Check out our careers page.

Drupal 8 Explained: Your Step-by-Step Guide to Drupal 8

by Stephen Burge

Written in no-nonsense plain English, this book is great for any Drupal beginner. It’s been praised as the day-to-day reference book that any new developer should keep handy on their desk.

This Drupal 8 guide will take you through 7 real Drupal 8 sites to demonstrate the latest practices in action. This all-encompassing view provides a look at the reasoning and methodology behind certain practices, and context for their larger impact on the site.

The Definitive Guide to Drupal 7 (Definitive Guide Apress)

by Benjamin Melancon

While new sites are being built in Drupal 8, it's important to remember that many of the sites you'll work on and maintain are in Drupal 7. This comprehensive book provides the nuts and bolts of any Drupal 7 site, helping you build a powerful and extensible system. Some concepts are slightly dated, so we'd recommend cross-checking online occasionally.

Pro Drupal 7 Development (Expert's Voice in Open Source)

by Todd Tomlinson

This book is for slightly more ambitious developers as it quickly jumps through the basic modules to the more complex. Breaking down the development of APIs and improvements to Drupal 7, this book will have any Drupal Developer producing complex modules in no time.

Essential Development Books

Many coding principles span development languages and frameworks. Here are our essentials for any developer seeking the ability to produce high quality, clean code.

Clean Code: A Handbook of Agile Software Craftsmanship

By Robert C. Martin

This book delves into the difference between good and bad code. Split into three parts, it first discusses the principles, patterns, and practices of writing clean code. Real-world case studies follow, before the book finishes with the signs of bad code and the problem-solving skills needed to debug and refresh any code base.

CSS Secrets: Better Solutions to Everyday Web Design Problems

by Lea Verou

CSS Secrets explains how to code common 'real world' solutions with CSS. Condensing the most useful and practical examples, this book is a more exciting read than the often extensive paperbacks which try to cover absolutely everything.

Begin expanding your language stack with this multifaceted book. PHP is essential for any Drupal developer, as it forms the core language of the Drupal framework. Meanwhile, learning JavaScript, CSS, and HTML5 will empower you to deliver more complex solutions.

And lastly, an open source book about… open source

The Cathedral & the Bazaar

By Eric S. Raymond

While this piece is slightly dated, its underlying concepts are still highly relevant and give great insight into the origins and essence of open source. It's also free, so definitely still worth having a skim through… if you can handle the formatting!

I hope you've found some great reads here; if you've got a personal favourite, please let us know below. And if you're interested in advancing your Drupal career, check out our careers page to see if there's a position that's perfect for you.

...And the story of how two unlikely organisations drove innovation in the Drupal Platform.

While delivering continual support for the Avanti Gas Drupal website, we wanted to implement a Postcode Lookup feature with auto-complete functionality. Existing solutions were either too basic or needlessly expensive, but we couldn't justify building a new solution from scratch. As luck would have it, I discovered at our weekly Drupal 8 meeting that my colleagues working on The Wildlife Trusts' sites required autocomplete capabilities for the address lookup on the trust's donations page.

These two very different organisations, Avanti Gas and The Wildlife Trusts, shared a mutual requirement. The value of a Postcode Lookup across different sectors and services warranted the investment in developing the module and contributing it to the open source community.

For The Wildlife Trusts, the team had used the PostCodeAnywhere PHP API (now Loqate), writing their own JavaScript to build the autocomplete functionality. But this was proving more and more time-consuming; initially, it wasn't even clear that the service offered a JavaScript widget.

The solution that the team built for The Wildlife Trusts was quite custom and specific, utilising the Address module’s more rigid form fields. So the end result wasn’t something we could quickly or easily contribute back to the community.

Clearly, this was a feature that would benefit the Drupal experience if a permanent, publicly available solution existed. When discussing how different organisations might use this feature, I came to the conclusion that it needed to be available as part of the widely adopted Webform module, which provides the popular drag-and-drop interface.

The Solution

The Postcode Lookup is fully responsive, has autocomplete capabilities, functions smoothly on an enterprise level, and has a comprehensive database to pull from for global addresses. All of this is available as a stand-alone widget, or as part of the existing webform module.

The Drupal 8 Loqate module in action

How it works

A user starts to type a postcode and the field will suggest addresses based on the text entered. Upon choosing an address from the drop-down list, a whole set of address fields are populated. This was achieved using Loqate’s (then PostCodeAnywhere) paid “Address Capture” JavaScript API, which was really quick and easy to implement.

To add Postcode Lookup to the Webform, we created the Loqate module. All you need is the Webform module and a Loqate API key and you’re up and running with a Postcode Lookup autocomplete field that you can put in any form on your site.
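The lookup flow itself is simple enough to sketch in a few lines. The following toy example is purely illustrative: the address data, field names, and functions are invented for this post and are not the Loqate API or the module's code.

```python
# Toy sketch of the postcode lookup flow described above.
# NOT the Loqate API: the data and field names here are invented.

ADDRESSES = [
    {"postcode": "M1 1AE", "line1": "1 Piccadilly", "city": "Manchester"},
    {"postcode": "M1 2WD", "line1": "5 Aytoun Street", "city": "Manchester"},
    {"postcode": "SW1A 1AA", "line1": "Buckingham Palace", "city": "London"},
]

def suggest(prefix):
    """Return addresses whose postcode starts with what the user has typed."""
    prefix = prefix.replace(" ", "").upper()
    return [a for a in ADDRESSES
            if a["postcode"].replace(" ", "").startswith(prefix)]

def populate_fields(address):
    """Map a chosen suggestion onto the form's address fields."""
    return {"address_line_1": address["line1"],
            "town_city": address["city"],
            "postcode": address["postcode"]}

matches = suggest("M1")          # user has typed "M1" so far
chosen = populate_fields(matches[0])  # user picks the first suggestion
```

In the real module, the suggestion and field-population steps are handled by Loqate's Address Capture JavaScript API; the sketch only shows the shape of the interaction.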

But that's not the end of it…

Going forward we are planning to add:

Integration with the Address module: this could have extensive use-cases, as the Address module is used heavily by the Commerce module, and integration would make entering addresses at checkout significantly easier.

The ability to automatically change required fields based on a user's location, to improve international experience.

Toggles for manual entry vs postcode entry.

General code improvements, for better maintainability and more use-cases.

Last weekend I attended my first ever Drupal Sprint organised by NWDUG.

My background with Drupal is slightly unconventional: as a newbie to Drupal, who only became aware of its existence just under a year ago, I’ve had a fast-track journey with the software whilst working at CTI Digital. I don’t have an in-depth knowledge of the technical side of Drupal, but as I work in Marketing, exploring the history and culture around Drupal, I’m quickly becoming a bit of an expert. Therefore, attending my first sprint was a great insight into the world of Drupal.

The May Sprint Team

The sprint was organised by NWDUG and we were happy to offer up our office space, as a large space with comfy seats and plentiful coffee is something always appreciated at a sprint. The space is actually open for anyone to use, so if you’re interested in holding a Drupal event, please get in touch any time.

Here’s what went on at the sprint:

9:30am

Everyone arrived, chipper and excited for the day. The first thing I noticed was how friendly and welcoming everyone was, even though I obviously wasn’t your standard Sprint attendee. Coffee and some light bites were shared out before we headed to our event space, named ‘The Garden’, for the welcome brief, given by Phil Norton.

Phil talked us through what would happen throughout the sprint and took special care to ensure first-time sprinters were set up and knew how to get involved. There were also a few non-technical activities, for newbies like me to get involved with.

Phil's welcome address in 'The Garden'

10am

The Sprint begins! Those interested in doing some development discussed which issues they’d like to get involved with, then broke into teams and got to work. Again, I was delighted to see just how engaged all the first-time sprinters were; no-one was left confused or overwhelmed by the Sprint.

11am

A few of us broke off into a Case Study Workshop. Working in Marketing, I’m a big fan of a beautifully written case study, so we created a task force to review how we can encourage more members of the Drupal community to celebrate their work within the Drupal case study section. We used the AIDA model to break down the journey of writing a case study for developers and marketers. Then, we discussed the blockers and opportunities at each stage.

The case study workshop in full swing

Lunch!

Pizza and more pizza! After a busy morning everyone had a break to eat pizza, play pool, and socialise. Thank you to Access, for co-providing the pizza with us. There was also time for a quick group photo and an impromptu dance break, where a mini sprinter taught the developers how to do The Floss. Unfortunately no future ‘Britain's Got Talent’ winners were discovered, but everyone definitely enjoyed themselves!

The Drupal Floss

1pm

Back to sprinting: the developers resumed their issue teams and the second non-technical activity took place. Paul Johnson took video interviews, detailing the origin stories of how the attendees got involved with Drupal in the first place. Members of the sprint group discussed how Drupal has changed their lives, something that Rakesh recently delved into on our blog. It was inspiring to hear the developments of personal stories and journeys with Drupal.

Post lunch sprinting

3pm

Before we knew it, the sprint was over! In summary: it was a brilliant day for technical and non-technical individuals alike. Afterwards a few of the group went for some celebratory drinks to soak up the success of the day.

What did we achieve?

A significant development in the accessibility of case study writing

The capture and documentation of the origin stories of multiple Drupal advocates

Special Thanks

Finally, I’d like to take some time to give special thanks to a few individuals on the day:

Our MVPs - Gemma Butler and Craig Perks

Gemma and Craig came down to keep the day running smoothly and it couldn’t have happened without them. From first aid to countless other essential roles, Gemma and Craig really made the day what it was and we couldn’t say thank you enough!

Rakesh James

Rakesh got the ball rolling for this sprint in the first place and was the driving force in helping it happen. Thank you Rakesh and hopefully this isn’t the last time you’ll be making something like this happen.

Phil Norton

Phil heads up the North West Drupal User Group and brought together such a welcoming, successful group of multi-talented individuals to form the sprint. So thank you, Phil, for such a great day!

And thank you to everyone else who attended:

Peter Jones

Des Fildes

Nigel White

James Davidson

Lesley Moreira

Tony Barket

Richard Sheppard

Phil Wolstenholme

Steve Hilton

Syed Huda

John Cook

Daniel Harper

Andrew Macpherson

Rachel Lawson (From The Drupal Association)

Craig Perks

Michael Trestianu

Paul Johnson

Andrew J

Interested in attending the next Drupal Sprint? Follow us and NWDUG on Twitter to hear about the next one.

Drupal 8.5 was released on the 7th of March 2018 with a host of new features, bug fixes, and improvements. There are plenty of exciting updates for developers in this blog. Or, if you're a business owner, click here to find out what this means for you.

Any projects using Drupal 8.4.x can and should update to Drupal 8.5 to continue receiving bug and security fixes. We recommend using Composer to manage the codebase of your Drupal 8 projects.

For anyone still on Drupal 8.3.x or earlier, I recommend reading the Drupal 8.4.0 release notes, as Drupal 8.4 included major version updates for Symfony, jQuery, and jQuery UI, meaning it is no longer compatible with older versions of Drush.

One of the great things we noticed from the update was the sheer number of commits in the release notes.

Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results.

So what are the highlights of the Drupal 8.5 release?

Stable Releases Of Content Moderation And The Settings Tray Modules

One of the changes to the way Drupal is maintained is the new and improved release cycle and the adoption of semantic versioning. Major releases used to happen only once every couple of years; Drupal now uses a much shorter cycle of only six months for adding new features to core. New features arrive as “experimental core modules” that can be tested and bug-fixed before eventually becoming part of Drupal core.

One example of the shorter release cycle is the BigPipe module. The module provides an implementation of Facebook’s BigPipe page rendering strategy, shortening the perceived page load speed of dynamic websites with non-cacheable content. This was an experimental module when Drupal 8.1 was released and became a part of Drupal core as a stable module in 8.2.

In Drupal 8.5 the BigPipe module is now enabled by default as a part of Drupal’s standard installation profile. BigPipe is actually the first new feature of Drupal 8 to progress from experimental to stable to being a part of a standard installation profile.
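The strategy behind BigPipe can be pictured with a toy sketch. This is an illustration of the general idea only, not Drupal's actual implementation: the cheap, cacheable page shell is flushed to the browser first, and the slow, personalised pieces are streamed afterwards as replacements for placeholders in the shell.

```python
# Toy illustration of the BigPipe rendering strategy; not Drupal's code.
# The shell is cacheable, so it is sent immediately. Expensive,
# non-cacheable fragments are streamed later and swapped into placeholders.

def render_bigpipe(shell, placeholder_renderers):
    """Yield the response in chunks: shell first, then each placeholder."""
    yield shell  # the browser can start rendering the page right away
    for placeholder_id, render in placeholder_renderers.items():
        # in real BigPipe, a client-side script swaps this into the page
        yield (placeholder_id, render())

chunks = list(render_bigpipe(
    "<html>... <span id='cart'></span> ...</html>",
    {"cart": lambda: "3 items in your basket"},  # slow, personalised bit
))
```

The perceived speed win comes from the first chunk: the user sees a rendered page immediately, while the expensive fragments arrive in the background.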

Two exciting modules are now stable in this update:

Settings Tray

Content Moderation

Settings Tray is part of the “outside-in” initiative, which aims to let more content management tasks be done without leaving the front end of the website, managing items in context, such as editing the order of the items in a menu block.

The Content Moderation module allows the site builder to define states in which content can be placed, such as “draft” and “needs review”, and to define the user permissions necessary to move content between those states. This way, you can have a large team of authors who can place documents into draft or needs-review states, while only website editors with specific permissions can publish.
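Conceptually, this is a small state machine. Here's a minimal sketch of the idea; the states, transitions, and permission names below are illustrative inventions, not Drupal's actual configuration or API (in Drupal, the site builder configures these through the UI):

```python
# Illustrative sketch of content moderation as a state machine.
# State names, transitions, and permission strings are made up here;
# in Drupal they are configured per-workflow by the site builder.

TRANSITIONS = {
    ("draft", "needs review"): "create new draft",
    ("needs review", "draft"): "create new draft",
    ("needs review", "published"): "publish content",
}

def move(content, new_state, user_permissions):
    """Move content to new_state if the user holds the required permission."""
    required = TRANSITIONS.get((content["state"], new_state))
    if required is None or required not in user_permissions:
        raise PermissionError(
            f"cannot move {content['state']!r} -> {new_state!r}")
    content["state"] = new_state
    return content

article = {"title": "Hello", "state": "draft"}
move(article, "needs review", {"create new draft"})  # authors may do this
# move(article, "published", {"create new draft"})   # would raise: needs
#                                                    # "publish content"
```

The point of the model is exactly what the paragraph above describes: authors can shuffle content between draft and review, but only users holding the publish permission can make it live.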

New Experimental Layout Builder

Sticking with experimental modules, Drupal 8.5 sees the introduction of a new experimental layout builder. This module provides the ability to edit the layouts of basic pages, articles and other entity types using the same “outside-in” user interface provided by the settings tray.

This allows site builders to edit the layout of fields on the actual page, rather than having to use a separate form in the backend. Another feature is the ability to have a different layout on a per-page or per-item basis, with the option to revert to the default if it doesn't work for you. There's still a long way to go, and this is currently only a basic implementation, but it should improve significantly over the coming months and will hopefully see a stable release in Drupal 8.6.

The experimental layout builder in action

PHP 7.2 Is Now Supported

This is the first version of Drupal to fully support the latest version of PHP. Support is not the only aspect of this release, though: site owners who try to install Drupal on a version of PHP lower than 7.0 are now warned that such versions will no longer be supported by Drupal as of March 7, 2019.

Drupal 8.5 now also uses Symfony components 3.4.5, since Symfony 3.2 no longer receives security coverage. I expect Drupal 8 to remain on 3.4 releases until late 2021 or the end of Drupal 8's support lifetime (whichever comes first). Finally, PHPUnit now raises test failures on deprecated code.

Media Module In Core Improved And Now Visible To All Site Builders

Drupal 8.4 added a Media API into core, based on all the hard work done on the contributed Media Entity module. The Media module provides “media types” (file, audio, video, and image) and allows content creators to upload and play audio and video files, and to list and re-use media. The core Media module can be expanded by installing key contributed modules, which add the ability to use externally hosted media types such as YouTube and Vimeo videos.

The module has been present in the codebase but was hidden from the module management interface due to user experience issues. These issues have now been addressed, and anyone with access to the module management page can enable the module.

New “Out of the Box” Demo Site

One of the key initiatives is the “out of the box experience”. The aim is to showcase what Drupal can do by providing a simple-to-install demo website (currently called Umami) with example content, configuration, and theme.

The profile adds sample content in a well-designed theme, presented as a food magazine. Using recipes and feature articles, this example site makes Drupal look much better right from the start and helps evaluators explore core Drupal concepts like content types, fields, blocks, views, and taxonomy.

The good news is that Drupal 8.5 now comes with the demo website available as an installation profile. The profile is “hidden” at the moment from the installation GUI but can be installed using the command line / drush.

The demo website still needs a lot of work, but the groundwork is firmly in place, and it may become selectable as an installation profile for demonstration and evaluation purposes in a future release of Drupal 8.5.x. I recommend not using the Umami demo as the basis of a commercial project yet, since no backward compatibility or upgrade paths are provided.

Migrate System Architecture Now Considered Stable

This item almost deserves its own blog post, as it's such a major milestone for Drupal: over 570 contributors worked on closing over 1,300 issues over a four-year period. The Migrate system architecture is now considered fully stable, and developers can write migration paths without worrying about the stability of the underlying system.

The Migrate Drupal and Migrate UI modules (which are used for Drupal 6 and 7 migrations to Drupal 8) are also considered stable for upgrading sites which are not multilingual, with multilingual support still being heavily worked on.

There is also support for incremental migrations meaning that the website can be worked on while the content is still being added on the site being upgraded/migrated from.

Links to Drupal 8 User Guide

Now, on a standard installation, you are greeted with a welcome page and a link to the new and improved Drupal 8 User Guide. While only a small addition, this is a major win, as it will improve the evaluation experience for new users.

This will require 2 core branches to be supported at once and additional work for core committers. However, new features and bug fixes will be available sooner so it will be interesting to see what the outcome of the proposal is.

What Does This Mean For Business Owners?

You'll need to ensure you've updated your site from Drupal 8.4.5 to 8.5.0 to continue receiving bug and security fixes, the next of which is scheduled for release on April 4th, 2018. If, however, you are on Drupal 8.3.x or below, we urge you to read the release notes for Drupal 8.4.0, as there were some major updates to consider. These include a jump from jQuery 2 to 3, which may have backward-compatibility issues affecting slideshows, carousels, lightboxes, accordions, and other animated components.

Drupal 8.4 also dropped support for Internet Explorer 9 and 10: any bugs that affect these browsers will no longer be fixed, and any workarounds for them have been removed in Drupal 8.5.

If your website is still on Drupal 7 then this is a good time to consider migrating to Drupal 8 as the hard work carried out on the migrate modules mentioned above will streamline the process of adopting the new platform.

If you have any questions about migrating your Drupal 7 website to Drupal 8 please let us know and we'll ensure one of our experts are on hand to help.

During the CXO day at Drupalcamp London, Dave O’Carroll the Head of Digital at War Child delivered a compelling speech on how Drupal has aided their mission in supporting the future and well-being of children living in some of the world’s most dangerous war zones.

When War Child UK began to feel their website could no longer facilitate their day-to-day needs, they began to consider a Drupal rebuild, or even using an alternative technology. The existing Drupal platform was unfriendly towards images and so couldn't reflect their work on the ground in its true light. The site's lack of responsive design was also a major issue.

After conducting research and consulting with peers, War Child UK came to the conclusion that Drupal still remained far above the rest in aiding the charity to continue their work and simply needed an update to meet their evolving needs.

When the time came for us to replace our website we were open to using different systems. But it soon became obvious that Drupal would remain the right choice

Dave O'Carroll

Four key areas were turning points in confirming the decision to stay with Drupal.

1. Compatibility

War Child UK are acutely aware of the world of software solutions out there. Despite the natural desire to focus on having an aesthetically pleasing website, the website's ability to seamlessly take on integrations like MailChimp, Stripe, and Salesforce was deemed essential. As most of these software APIs and plugins are Drupal-friendly, sticking with Drupal in this regard was a no-brainer.

The team at War Child UK dedicate themselves to changing the lives of children and to spending as much time and money in the field as possible. Being a charity, they also have to provide a great deal of accountability on where their money comes from and where it goes, so investment in digital can be incredibly difficult to justify. But Drupal's compatibility means the charity can spend more resources on helping children, not on conducting systems integrations.

Having done this many times before, I knew the best websites are the ones that play nice with the other children - they integrate well.

Dave O'Carroll

2. Ease of use

War Child needed to give content creators the independence to upload their own stories, so their messages could be told from the heart and not diluted by multiple teams. If they were able to train staff to upload content directly, War Child's work could be shared in near real time.

Dave explained that, with previous experience of WordPress and Squarespace at other charities, he had found staff would receive training but come back repeatedly to clarify how to perform daily tasks. The simple, intuitive administration screens we configured for War Child meant that, with Drupal, staff needed to be shown just once. This saves War Child time, and time saves money.

Our HR team, who don’t spring to mind as digital experts, are able to manage their own site section. It’s great they are able to have a degree of freedom.

Dave O'Carroll

3. Support

The flexibility of Drupal provides support for all of War Child's goals. War Child needs to be flexible and creative to stand alongside larger charities with far bigger communications teams and marketing resources. The vast community surrounding Drupal means that, no matter how improbable an idea appears to be, the community always manages to surface gems that make it a reality.

With a big fat creative idea, there always seems to be a way to do it with Drupal

Dave O'Carroll

4. Future Proofing

What if I get hit by a bus? A concerning idea, but one that applies to War Child UK immensely. With thousands of children relying on the charity, they can't afford not to plan for the ‘what ifs’. Drupal's intuitive CMS already makes it easy to pick up where the last person left off. We crafted a solution that takes this capability further and built a system to the best possible standards. This stronger governance means that if War Child ever need to move agencies, replace key team members, or work with freelancers, the continuity will still be there, saving them time and money and allowing War Child to focus on their mission.

Conclusion

All too often, children are portrayed as the collateral damage of war. War Child wanted their site to tell a different story, so we implemented designs that placed children at the heart of the new website; you can read the full website case study here. The new platform allows War Child to overcome past constraints and think outside the box for future campaigns. We look forward to continuing to help War Child support children in new and innovative ways for years to come. One of their recent campaigns, ‘Robot’, has been particularly moving; please watch the video below.

This is the first part in a series on how not to ruin your life on your next Drupal project. Sound extreme? Well, if you've ever suffered the crushing defeat of working your tail off on a lengthy project, only to sit there after launch feeling like you just came out of the opening night of Star Wars: The Phantom Menace (i.e. severely disappointed and a bit confused), then you know that it is indeed extreme. We spend the majority of our day at work, and when it's not rewarding or energy-giving, it's a real drag.

So what is the formula? Well, a blog post isn't going to solve all your problems, but there are certainly key approaches we have taken that have helped us avoid catastrophe time and time again. Translation? We've managed an extremely high customer satisfaction rate for over two decades. What's been happening here seems to be working, so we pay a lot of attention to what exactly we are doing and assess why we think it's working. If you want a high-level bird's-eye view, check out our process page. We are going to get a bit more down and dirty here, though.

Ultimately, we want you to go home to your family at the end of the day saying “GUESS WHAT I DID AT WORK TODAY EVERYONE!!” (like we do) instead of “Can we just order pizza and go to bed at 7?”.

We’ve identified 3 essential components to kicking a project off right, the first of which will be covered in this post. They are the following:

So let's start with Aggressive and Invested Requirements Gathering. We spent a lot of time thinking about this, and I realized it comes down to the adjectives. Everyone knows (mostly) about requirements gathering, but it's a minefield of unasked questions, unanswered questions, misconceptions, forgetfulness, and chaos. The solution? Take ownership of this baby from the beginning, treat it like it's your project and your passion, and do what it takes to nail it down. Getting answers that make your life easier, despite your suspicions that the client is maybe not thinking it through, doesn't help anyone. Take no shortcuts and care about everything.

“Take ownership of this baby from the beginning.”

Here are 3 specific goals:

Assess priorities (theirs and yours!)

Priorities are key because we can easily get hung up on things that ultimately aren’t that important. On the flip side, there are things that are tremendously important to one of the two parties, and hence, it must be important to both. So the client says I care most about X, then Y, then Z. In your head you’re thinking “Yikes, Z has a huge unknown element that I’d like to solve quickly to understand the implications.” So talk about it. Repeat their priorities back to them and state your own and find that happy middle ground where you can pursue the project in an efficient and effective way while also focusing on what matters. It sounds simple, but unspoken expectations or concerns are a plague in project management.

Determining constraints (time, money, features, personnel)

I still love the age-old project management triangle, which says that for any given project you can only fix two of the three key constraints: time, money, or features. This means you can't simply dictate the budget and the schedule and also expect a very rigid set of requirements. The problem is that, even after stating this, there is a lot of pressure from the client to set expectations on all three, and that simply isn't possible. So it's critical early on to sort out what the real constraints are. OK, you would like this to stay under $50k. Is that a hard cap, or could you go over if you felt it was worth it? So you want this launched by January 1st. Is that more of a clean-sounding date, or is it tied to a fiscal year or some other real deadline? OK, so you want features X, Y, and Z. Which of those would be deal breakers to not have? This kind of questioning is very helpful because, early in the build phase, you can make intelligent decisions about how and when to collaborate with the client, since you know the significance of obstacles or changes of direction that impact these things.

The last thing I'm throwing on top of this triangle is the concept of personnel. We've found that knowing early on who your stakeholders are, who your end users are, and who your editors and admins are is critical. I've literally had meetings where we're deep into requirements and then I meet the person who has veto power over everything, and the thing goes sideways. We've learned as well that there is a repeating sales cycle when new stakeholders arrive, because convincing the last three people doesn't mean you've convinced the next three. I've also had times where a stakeholder makes some critical decisions, but then, after talking to the people “on the ground”, I find that he was simply wrong about some of the day-to-day operations. It's good to talk to everyone, but also to find out each person's role in the big picture. Oftentimes we've found ourselves advocating on behalf of lower-level employees, who often bring up important and practical issues that decision-makers overlook. It's a delicate balance, but if the system isn't welcomed and adopted by its primary users, the project will sink, even if the ones writing the checks are getting what they think they want.

Reading between the lines

This is tied to the item above in a lot of ways, but it stands on its own as an important point. When you've done this long enough, you learn that what a potential client asks for is not always really the point. Often there is a hidden goal or motivation that has led to the formation of a feature request. Even if that request perfectly solves the need, it's still important to discover the need, because it can affect the implementation and guide the specifics. For example, if a request is made to let users download an export of tracking data, but you dig and find out they're just using this tool to turn around and upload it into a remote system, and that it's a bit of a pain, maybe a web service is better, where their system can talk directly to ours and users can step out of the daily grind.

Conclusion

So in summary - gathering requirements the same way you date someone you’re thinking of marrying. Care about it and pursue it as if it’s the most important thing you’ve got going with an end goal of a lifetime of happiness.

Up Next: Running a Drupal project the right way: Part 2 - Relentless Ideation

This is the first part in a series on how not to ruin your life on your next Drupal project. Sound extreme? Well, if you’ve ever suffered the crushing defeat of working your tail off on a lengthy project only to sit there after launch feeling like you just came out of the opening night of Star Wars: The Phantom Menace (i.e. severely disappointed and a bit confused), then you know that it is indeed extreme. We spend the majority of our day at work, and when it’s not rewarding or energy-giving, it’s a real drag.

So what is the formula? Well, a blog post isn’t going to solve all your problems, but there are certainly key approaches we have taken that have helped us avoid catastrophe time and time again. Translation? We’ve maintained an extremely high customer satisfaction rate for over two decades. What’s been happening here seems to be working, so we pay a lot of attention to what exactly it is that we are doing and assess why we think it’s working. If you want a high-level bird's-eye view, check out our process page. We’re going to get a bit more down and dirty here, though.

Ultimately, we want you to go home to your family at the end of the day saying “GUESS WHAT I DID AT WORK TODAY EVERYONE!!” (like we do) instead of “Can we just order pizza and go to bed at 7?”.

We’ve identified three essential components to kicking a project off right, the first of which will be covered in this post.

So let’s start with Aggressive and Invested Requirements Gathering. We spent a lot of time thinking about this and I realized it comes down to the adjectives. Everyone knows (mostly) about requirements gathering, but it’s a minefield of unasked questions, unanswered questions, misconceptions, forgetfulness, and chaos. The solution? Take ownership of this baby from the beginning and treat it like it’s your project - it’s your passion - and do what it takes to nail it down. Getting answers that make your life easier, despite your suspicions that the client is maybe not thinking it through, doesn’t help anyone. Take no shortcuts and care about everything.

“Take ownership of this baby from the beginning.”

Here are 3 specific goals:

Assess priorities (theirs and yours!)

Priorities are key because we can easily get hung up on things that ultimately aren’t that important. On the flip side, there are things that are tremendously important to one of the two parties, and hence must be important to both. So the client says, “I care most about X, then Y, then Z.” In your head you’re thinking, “Yikes, Z has a huge unknown element that I’d like to solve quickly to understand the implications.” So talk about it. Repeat their priorities back to them, state your own, and find the happy middle ground where you can pursue the project in an efficient and effective way while also focusing on what matters. It sounds simple, but unspoken expectations or concerns are a plague in project management.

Determining constraints (time, money, features, personnel)

I still love the age-old project management triangle, which says that for any given project you can only pin down two of the three key constraints: time, money, or features. This means you can’t simply dictate the budget and the schedule and also expect a very rigid set of requirements. The problem is that even after stating this, there is a lot of pressure from the client to set expectations on all three, and that simply isn’t possible. So it’s critical early on to sort out what the real constraints are. Ok, you would like this to stay under $50k. Is that a hard cap, or could you go over if you felt it was worth it? So you want this launched by January 1st. Is that more of a clean-sounding date, or is it tied to a fiscal year or some other real deadline? Ok, so you want features X, Y, and Z. Which of those would be deal breakers to not have? This kind of questioning is very helpful because early in the build phase you can make intelligent decisions about how and when to collaborate with the client, since you know the significance of obstacles or changes of direction that impact these things.

The last thing I’m throwing on top of this triangle is the concept of personnel. We’ve found that knowing who your stakeholders are, who your end users are, and who your editors and admins are, early on, is critical. I’ve literally had meetings where we’re deep into requirements and then I meet the person who has veto power over everything, and the whole thing goes sideways. We’ve also learned that there is a repeating sales cycle when new stakeholders arrive, because convincing the last three people doesn’t mean you’ve convinced the next three. I’ve also had times where a stakeholder makes some critical decisions, but then, after talking to the people “on the ground,” I find that they were simply wrong about some of the day-to-day operations. It’s good to talk to everyone, but also find out each person’s role in the big picture. Oftentimes we’ve found ourselves advocating on behalf of lower-level employees, who bring up important and practical issues that decision-makers often overlook. It’s a delicate balance, but if the system isn’t welcomed and adopted by its primary users, the project will sink even if the ones writing the checks are getting what they think they want.

Reading between the lines

This is tied to the item above in a lot of ways, but it stands on its own as an important point. When you’ve done this long enough, you learn that what a potential client asks for is not always really the point. Often there is a hidden goal or motivation behind a feature request. Even if that request perfectly solves the need, it’s still important to discover the need itself, because it can affect the implementation and guide the specifics. For example, say a request is made to let users download an export of tracking data. If you dig and find out they’re just using that export to turn around and upload it into a remote system, and that it’s a bit of a pain, then maybe a web service is a better fit: their system can talk directly to ours, and users can step out of the daily grind.

Conclusion

So, in summary: approach requirements gathering the same way you’d date someone you’re thinking of marrying. Care about it and pursue it as if it’s the most important thing you’ve got going, with an end goal of a lifetime of happiness.

Up Next: Running a Drupal project the right way: Part 2 - Relentless Ideation

During the redesign process of a website, there are many small changes that can ultimately affect the traffic of the new site. The key is to identify any changes that might break SEO, or changes that might affect the way the site looks to search engine spiders ahead of time to avoid traffic drops. In the end, we want the site to look fresh and new while still getting the same traffic, or more, as the old design.

At Zivtech, we look at many factors in the planning phase of a website redesign project and try to identify those that could cause drops in traffic after the new design is launched. Once these have been identified, we ensure all of these tasks have been completed before launch. Let’s take a look at some of these factors and how to avoid traffic drops on your next website redesign project.

Meta Tags

We typically build sites with Drupal, so the Metatag module handles much of the meta tag configuration and display on the site. If you aren’t using Drupal, though, changes to your front-end design could affect your meta tags and confuse search engine spiders. You’ll need to make sure that all of your pages have meta tags and that there aren’t any duplicates.
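As a concrete illustration (with invented page content), these are the kinds of tags worth auditing on every page of the new markup: a unique title, a unique description, and a canonical URL.

```html
<head>
  <!-- Each page should have a unique, descriptive title and description. -->
  <title>Admissions | Example University</title>
  <meta name="description" content="How to apply to Example University, including deadlines and requirements.">
  <!-- A canonical URL tells spiders which variant of a page is authoritative. -->
  <link rel="canonical" href="https://www.example.edu/admissions">
</head>
```

A crawl of the new site before launch can confirm that no two pages share the same title or description.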

Broken Links

Broken links are a huge problem during website redesigns. This could be a result of changes in the menu structure or in path structures for content types. Broken links mean that users and search engines can’t find the pages they’re looking for, which can really wreak havoc on your site traffic statistics.

To avoid broken links in Drupal, we can use the Link checker module, but there are also third party tools that can be used for non-Drupal sites. Google Search Console provides some additional tools to identify broken links and 404 pages too.
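For a sense of what these tools do under the hood, here is a minimal sketch in Python that extracts the links from a page's HTML using only the standard library; a real checker would then request each URL and flag any 404 responses. This is a toy illustration, not a replacement for Link checker or a dedicated crawler.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all hrefs found in an HTML string, in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Each extracted link could then be fetched (for example with urllib.request) and any non-200 status logged for review.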

URL Redirects

Broken redirects or missing redirects to new URLs are also a big problem on site redesigns. These typically happen due to changes in URL patterns or menu structures. The Redirect module in Drupal provides an interface to add redirects for your pages without any coding experience. Non-Drupal sites can use .htaccess files or redirect statements in their web server configuration to ensure that all URLs that are changing have proper 301 redirects.
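For an Apache-served site, such redirects can live in an .htaccess file. A sketch, with invented paths; the first form uses mod_alias, the pattern-based form requires mod_rewrite:

```apache
# Permanent (301) redirects for individual pages whose paths changed.
Redirect 301 /old-about-page /about
Redirect 301 /news/2016/big-announcement /blog/big-announcement

# Pattern-based redirect for an entire section that moved.
RewriteEngine On
RewriteRule ^news/(.*)$ /blog/$1 [R=301,L]
```

Using 301 (permanent) rather than 302 (temporary) tells search engines to transfer the old URL's ranking signals to the new one.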

XML Sitemap

As URLs are changed on your site during a redesign, you’ll want to ensure that the XML sitemap has updated URLs that match the new ones. The XML sitemap module handles this for us on a Drupal site.

If you aren’t running Drupal, a plugin for your CMS may handle this, or you’ll need to generate a new sitemap using third party tools. Once this has been completed, you can log in to Google Search Console and resubmit your sitemap for indexing.
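A sitemap is simple enough to generate yourself if no plugin fits. A minimal sketch using Python's standard library, emitting the XML format defined by the sitemaps.org protocol (the URLs here are placeholders):

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a minimal XML sitemap string for a list of absolute URLs."""
    ET.register_namespace("", SITEMAP_NS)
    urlset = ET.Element("{%s}urlset" % SITEMAP_NS)
    for url in urls:
        url_el = ET.SubElement(urlset, "{%s}url" % SITEMAP_NS)
        loc = ET.SubElement(url_el, "{%s}loc" % SITEMAP_NS)
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")
```

The resulting file would be saved as sitemap.xml at the site root and resubmitted through Google Search Console.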

Google Analytics

If you forget to place your Google Analytics tracking code in your new site’s markup before launch, you can end up in the dark when it comes to traffic fluctuations. The Google Analytics module handles the placement of this tracking code on a Drupal website, and even provides a status warning on the Drupal status page if the tracking ID has not been configured yet.

Those who aren’t using Drupal should follow the instructions provided by Google Analytics to place the code snippet in their site’s markup, or use a plugin provided by their CMS of choice. With the Google Analytics tracking code in place, your organization can get a much better overview of how your site performs after the redesign launches. It’s much easier to track your successes or failures in your redesign if you were already running Google Analytics, but a relaunch is a great time to start using it too.

While the factors in this post are some of the most important that we look at during a site redesign project at Zivtech, each project is unique and could require additional changes to your site to ensure you avoid traffic drops after launching your new design.

Overall, you want to identify any changes that could affect URLs, meta data, and even content structure that search engine spiders or your visitors might be confused about. Even small changes or a missing meta tag can affect your search engine rankings, which can lead to traffic drops. Do your future self a favor and make a list of the individual factors that could affect your site. Then ensure that list is completed before calling your next website redesign a success.

Do you need an intranet?

We’ve used Drupal to build intranets that securely keep internal content and documents for staff eyes only. Drupal has an abundance of community features that make it easy to have wikis, commenting, user profiles, and messaging. Many organizations we’ve worked with integrate their intranet with their LDAP or other Single Sign On system.

Radial's intranet allows team members to quickly locate information about co-workers

We’ve also used Drupal for our own intranet for the past eight years. Our intranet helps keep our internal knowledge base easy to access and organizes information like our servers, sites, clients, and projects.

Do you run on spreadsheets and email?

Some of the projects I’ve really enjoyed developing have used Drupal as a tool to increase the efficiency of critical business processes. Organizations tend to rely heavily on Excel or Google spreadsheets and email to manage information and communications. When your needs outgrow those tools, it’s time for a web application.

Conant's SmartDeps web application improved their workflow and allowed them to stop relying on email

Have you outgrown Excel?

Data organization needs often outgrow the spreadsheet sweet spot. Typically, you’ll see some of the following problems:

Versions of a spreadsheet are getting emailed around.

Mistakes are being made in a spreadsheet, causing serious problems.

Data has been deleted and lost.

To minimize mistakes, one person has been made the editor of the spreadsheet, bottlenecking the process.

Manual work is being done where it doesn’t need to be. For example, does someone check the spreadsheet every week and then manually send out emails based on information there?

A web application like Drupal stores data safely in a database and provides an interface for people to access and update the data in a much more controlled way than a spreadsheet can. You can decide who should be able to see the data (which is always up to date) and who should be allowed to edit it or delete it. You can also control data validation, greatly reducing mistakes in data entry. You can track changes to the data. You can make easy to read reports. You can create automated workflows, like sending automated emails for example.

The best part of all this? This type of development work typically costs much less than what you spent on your marketing website. Mainly that’s because you don’t need custom design work and implementation on this kind of tool.

Have you outgrown Email?

I have to admit, email is not my friend. I’m a project-based worker, and I need my written communication organized by project and by task. When I’m cc’ed on an email with ten people on it and the conversation veers from one topic to another while the subject line stays the same, I really can’t follow what’s going on. It seems as though the only way to keep up with these emails is to do nothing but tend to one’s email. And what happens when a new person is hired and all the history of your organization can be found only in old emails that they don’t have?

No organization can completely remove the reliance on email these days. But if you are running your business on email, a web application may be able to help.

Often a Software as a Service (SaaS) solution can help organize communication. For communications surrounding sales leads, Salesforce can help. For discussions, Slack. For project-based organization, Jira or Basecamp. But if you have a very specific process around some of your communications, a custom Drupal-based application can be a great fit.

Here are a few examples our team has worked on:

A legal printing company was relying on email to get work requests from customers, then emailing back and forth with estimates and questions. We built them a custom web application in which the customers enter in the request and the system organizes the process, greatly speeding up the work and automating many aspects.

A foundation was receiving grant applications by email and organizing the applicants and review process in Excel. We built a system to manage the entire process online.

Need help?

If you have a pain point at work that revolves around organization or access of information, an efficient solution that saves you time and money might be easier to come by than you think. Search for a SaaS product geared toward your needs, and if you find your needs are too unique for what’s out there, let’s talk about a custom web application.

You’re about to begin a huge overhaul of your higher education website and one of the first steps is choosing a content management system. It’s likely that Drupal and WordPress have come up in your research, and you may be trying to decide between the two.

Drupal and WordPress are often compared to one another because they’re both open source content management systems. Both are capable of creating clean, responsive websites that are easy to manage for content editors. The functionality of both can be extended using third party code. And the code for both is openly available for anyone to use, change, and distribute, meaning there are no licensing fees like those required by closed source options.

There are a significant number of higher education websites on Drupal; Harvard, Brown, and Oxford University all use the CMS, to name a few. According to Drupal.org, 71% of the top 100 universities use Drupal. And there’s some sound reasoning behind that.

Both WordPress and Drupal have strengths and are well suited for a diverse range of projects. WordPress was primarily built for standalone websites or blogs with minimal variation in content types. Drupal was built for more complex, feature rich websites that require significant interconnectivity between pages or site sections, like those required by higher education.

Here are some factors to consider when choosing between the two content management systems.

Complex Architecture

If you’re setting out to redesign a higher ed website, you’re likely looking at a fairly complex endeavor. Your website probably requires more complicated architecture than most; you’ll need various sections that are targeted toward different groups of users, such as prospective students, current students, alumni, faculty, and staff.

Drupal’s ecosystem was built around cases like these. It can handle thousands of users and different content types. Upgrades in Drupal 8 have also resulted in better caching features that make for improved page load times.

WordPress works well for general, standalone marketing sites, but it will struggle with aspects like multiple install profiles, content sharing networks, consistency, maintainability, and connectivity with other parts of the site.

Users and Permissions

Your website also most likely has extensive user and permission requirements. Different groups will need to perform different tasks and interact with the site in a variety of ways. You may also have different departmental sites that will need to be managed by different teams while staying consistent with branding guidelines.

Drupal allows for multi-site functionality that can also be centrally managed. Different users and departments can be given diverse permissions and roles so that you can limit their capabilities to just what they need and nothing more.

Security

No CMS is completely immune to security vulnerabilities. It’s possible that WordPress has had more security issues in the past simply due to the fact that it’s a more widely used CMS. WordPress relies heavily on plugins when used for more complex websites, and these plugins are often susceptible to security issues.

Drupal is well known as a very secure content management system and is trusted by WhiteHouse.gov and other federal government sites. Drupal has a dedicated security team that receives security issues from the general public and coordinates responses. Issues are resolved as quickly as possible and users are alerted to vulnerabilities through regular announcements. The security team also provides documentation on how to write secure code and how to secure your site. With these practices, you can rest assured that all of your student and faculty data would be protected.

Ease of Use

For simpler sites, WordPress beats Drupal when it comes to ease of use. Because it was developed for less complex, standalone websites, it’s very easy to get it up and running, even for those who aren’t very tech savvy. Drupal’s complexity means it has a steep learning curve and takes longer to build.

Drupal is a feature-rich CMS that can build more advanced sites, but it also requires more technical experience. You need a team of experts with ample experience to get your project accomplished, and this is likely to be more expensive than a team of WordPress developers.

But the extra price that you pay for a team of experts will pay off in the end when you have a website that is capable of doing everything you need it to. Drupal's high barrier to entry with respect to module development also means the quality of modules available is higher, and the choices are fewer but more obvious.

Which Should You Choose?

When it comes down to it, Drupal is likely the better choice between the two. It’s clear that while WordPress has its strengths, Drupal is a better choice for more advanced sites, like those required by higher education.

Drupal provides a strong base to begin rapidly building a complex system. It’s often the CMS of choice for large websites that require significant interconnectivity between different sections. It also allows for a wide range of user roles and permissions, and security is a priority for the entire community. All of these aspects make it a great CMS choice for higher education websites.

Computers are finicky. As stable and reliable as we would like to believe they have become, the average server can cease to function for hundreds of different reasons. Some of the common problems that cause websites or services to crash can’t really be avoided. If you suddenly find your site suffering from a DDoS attack or a hardware failure, all you can do is react to the situation.

But there are many simple things that are totally preventable that can be addressed proactively to ensure optimal uptime. To keep an eye on the more preventable issues, setting up monitoring for your entire stack (both the server as well as the individual applications) is helpful. At Zivtech, we use a tool called Sensu to monitor potential issues on everything we host and run.

Sensu is a Ruby project that operates by running small scripts to determine the health of a particular application or server metric. The core project ships with a number of such scripts, called “checks.” It’s also easy to write custom checks, and they can be written in any language, so developers can easily monitor new services or applications. Sensu can also run in a client-server model and alert members of the team when things aren’t behaving properly.
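Sensu follows the Nagios convention for check results: the script prints a one-line message and exits with status 0 for OK, 1 for WARNING, or 2 for CRITICAL. A minimal sketch of that threshold logic in Python (the thresholds here are arbitrary; a real check would measure an actual metric):

```python
# Sensu (like Nagios) interprets a check's exit status:
# 0 = OK, 1 = WARNING, 2 = CRITICAL.
OK, WARNING, CRITICAL = 0, 1, 2

def check_value(value, warn_at, crit_at):
    """Map a metric against warning/critical thresholds to a check status."""
    if value >= crit_at:
        return CRITICAL, "CRITICAL: value %s >= %s" % (value, crit_at)
    if value >= warn_at:
        return WARNING, "WARNING: value %s >= %s" % (value, warn_at)
    return OK, "OK: value %s" % value
```

A real check script would print the message and call sys.exit(status) so Sensu can interpret the result.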

Server checks

As a general place to start, you should set up basic health checks for the server itself. The following list gives you a good set of metrics to keep an eye on and why it is in your best interest to do so.

RAM

What to check

Monitor the RAM usage of the server versus the total amount of RAM on the server.

Potential problem monitored

Running out of RAM indicates that the server is under severe load, and the resulting drop in application performance will almost certainly be noticeable to end users.

Actions to take

Running low on RAM may not be a problem if it happens once or twice for a short time. Sometimes there are tasks that require more resources and this may not cause problems, but if the RAM is perpetually running at maximum capacity, then your server is probably going to be moving data to swap space (see swap usage below) which is much slower than RAM.

Running near the limits of RAM constantly is also a sign that crashes are imminent, since a spike in traffic or usage will surely require resources that the server simply doesn’t have. Additionally, spikes in RAM usage may indicate that a rogue process or poorly optimized code is running, which helps developers address problems before your users become aware of them.
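On Linux, the used-versus-total figure can be derived from /proc/meminfo. A minimal sketch, assuming a kernel recent enough (3.14+) to report the MemAvailable field:

```python
def ram_used_percent(meminfo_text):
    """Compute used-RAM percentage from the text of /proc/meminfo."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    total = fields["MemTotal"]
    available = fields["MemAvailable"]
    return 100.0 * (total - available) / total
```

In a real check you would pass in the contents of /proc/meminfo, e.g. `ram_used_percent(open("/proc/meminfo").read())`, then compare the result against your warning and critical thresholds.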

Linux swap usage

What to check

Check swap usage as a percentage of the total swap space available on a given server.

Potential problem monitored

When the amount of available RAM is running short or the RAM is totally maxed out, Linux moves data from RAM to the hard drive (usually in a dedicated partition). This hard drive space is known as swap space.

Generally, you don’t want to see too much swap space being used because it means that the available RAM isn’t enough to handle all the tasks the server needs to perform. If the swap space is filled up completely, then it means that RAM is totally allocated and there isn’t even a place on disk to dump extra data that the system needs. When this happens, the system is probably close to a crash and some services are probably unresponsive. It can also be very hard to even connect to a server that is out of swap space as all memory is being used completely at this point and new tasks must wait to run.

Actions to take

If swap is continually running at near 100% allocation, it probably means the system needs more RAM, and you’d want to increase the swap storage space as part of this maintenance. Keeping an eye on this will help ensure you aren’t under-allocating resources for certain machines or tasks.

Disk space

What to check

Track current disk space used versus the total disk space on the server’s hard drives, as well as the total inodes available on the drives.

Potential problem monitored

Running out of disk space is a quick way to kill an application. Unless you have painstakingly designed your partitions to prevent such problems (and even then you may not be totally safe), when a disk fills up some things will cease working.

Many applications write files to disk and use the drive to store temporary data. Backup tasks rely on disk space, as do logs. Many tasks will cease functioning properly when a drive or partition is full. On a website running Drupal, a full drive will prevent file uploads, and can even cause CSS and JavaScript to stop working properly or data to fail to persist to the database.

Actions to take

If a server is running low on space, it is relatively easy to add more. Cloud hosting providers usually allow you to attach large storage services to your running instance and if you use traditional hardware, drives are easy to upgrade.

You might also discover that you’ve been storing data you don’t need or forgot to rotate some logs which are now filling up the drive. More often than not, if a server is running out of space, it is not due to the application actually requiring that space but an error or rogue backup job that can be easily rectified.
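The byte-level half of this check is one call in Python's standard library:

```python
import shutil

def disk_used_percent(path="/"):
    """Percentage of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return 100.0 * usage.used / usage.total
```

Note this covers bytes only; inode exhaustion can be checked separately on Unix via os.statvfs, comparing the f_files (total inodes) and f_ffree (free inodes) fields.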

CPU

What to check

Track the CPU usage across all cores on the server.

Potential problem monitored

If the CPU usage goes to 100% for all cores, then your server is thinking too hard about something. Usually when this happens for an extended period of time, your end users will notice poor performance and response times. Sites hosted on the server might become unresponsive or extremely slow.

Action to take

In some cases, an over-allocated CPU may be caused by a runaway process, but if your application does a lot of heavy data manipulation or cryptography, it might be an indication that you need more processing power.

When sites are being scraped by search providers or attacked by bots in some coordinated way, you might also see associated CPU spikes, so this metric can tip you off to a host of issues, including the early stages of a DDoS attack on the server. You can respond by quickly restarting processes that are misbehaving, blocking potentially harmful IPs, or identifying other performance bottlenecks.
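A common way to express CPU pressure is load average per core, which the standard library exposes on Unix systems; a sustained value near or above 1.0 suggests saturation. A minimal sketch:

```python
import os

def load_per_core():
    """1-minute load average divided by core count (Unix only)."""
    one_min, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1  # cpu_count() can return None
    return one_min / cores
```

A monitoring check could warn when this value stays above, say, 0.8 for several consecutive intervals rather than alerting on a single momentary spike.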

Reboot required

What to check

Linux servers often provide an indication that they should be rebooted, usually related to security upgrades.

Potential problem monitored

Often after updating software, a server requires a reboot to ensure critical services are reloaded. Until this is done, the security updates are often not in full effect.

Action to take

Knowing that a server requires a reboot allows your team to schedule downtime and reduce problems for your end users.
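On Debian and Ubuntu systems, the packaging tools signal this by creating a flag file, which makes the check trivial; other distributions signal it differently, so this sketch assumes that convention:

```python
import os

# Debian/Ubuntu convention: package hooks create this file when an
# installed update (often a kernel or libc upgrade) needs a reboot.
REBOOT_FLAG = "/var/run/reboot-required"

def reboot_required():
    """True if the packaging system has flagged that a reboot is needed."""
    return os.path.exists(REBOOT_FLAG)
```

Wired into an alerting system, this lets the team schedule a maintenance window instead of discovering the pending reboot during an incident.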

Drupal specific checks

Zivtech runs many Drupal websites. Over time we have identified some metrics to help us ensure that we are always in the best state for security, performance, search indexes, and content caching. Like most Drupal developers, we rely on drush to help us keep our sites running. We have taken this further and integrated drush commands with our Sensu checks to provide Drupal specific monitoring.

Drupal cron

What to check

Drupal’s cron function is essential to the health of a site. It handles cleanup, starts long-running tasks, processes data, and performs many other background operations. Countless contributed modules rely on cron running as well.

Potential problem monitored

When a single cron job fails, it may not be a huge problem. But the longer a site exists without a successful cron run, the more problems you are likely to encounter. Services start failing without cron. Garbage cleanup, email notifications, content updates, and indexing of search content all need cron runs to complete reliably.

Action to take

When a cron job fails, you’ll want to find out if it is caused by bad data, a poorly developed module, permissions issues, or some other issue. Having a notification about these problems will ensure you can take proactive measures to keep your site running smoothly.
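A staleness check along these lines is straightforward to sketch. This assumes a Drupal 8 site where drush is available and the state key system.cron_last holds the Unix timestamp of the last successful cron run (the `state:get` spelling is Drush 9+ syntax); the three-hour threshold is an arbitrary example:

```python
import subprocess
import time

def seconds_since_last_cron(drush="drush"):
    """Ask drush for the last-cron timestamp (Drupal 8 state key)."""
    out = subprocess.run(
        [drush, "state:get", "system.cron_last"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return time.time() - int(out)

def cron_is_stale(age_seconds, max_age_seconds=3 * 60 * 60):
    """Flag cron as stale if it hasn't run within the allowed window."""
    return age_seconds > max_age_seconds
```

A Sensu check would combine the two: fetch the age, then exit WARNING or CRITICAL once the age crosses your thresholds.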

Drupal security

What to check

Drupal has an excellent security team and security processes in place. Drush can be used to get a list of modules or themes that require updates for security reasons. Generally, you want to deploy updates as soon as you find out about them.

Potential problem monitored

By the time you’ve been hacked, it’s too late for preventative maintenance. You need to take the security of your site’s core and contributed modules seriously. Drupal can alert site administrators via email about security releases, but moving these checks into an overarching alerting system, with company-wide guidelines about actions to take and a playbook for handling them, will shorten the window during which a given site is vulnerable.

Action to take

Test your updates and deploy them as quickly as you can.

Be Proactive

It’s hard to overstate the value of knowing about problems before they occur. As your organization grows, it becomes more and more disruptive to deal with emergency issues on servers. The stress of getting a site back online is exponentially greater than the stress of planned downtime.

With a little bit of effort, you can detect issues before they become problems and allocate time to address these without risking the deadlines for your other projects. You may not be able to avoid every crash, but monitoring will enable you to tackle certain issues before they disrupt your day and will help you keep your clients happy.

It’s no secret that Drupal is incredibly powerful when it comes to its capabilities as a content management system. Many businesses choose Drupal not only because of the lower cost that comes with it being open source, but also because it can be customized to build out the exact features that they need. But how does it stack up for the content editors?

At Zivtech, we recently launched our new Drupal 8 site. As a member of the marketing team, I spend a significant amount of time writing and editing our site’s content, whether that’s static services page content, upcoming events, or new blog posts. Needless to say, I want the experience to be intuitive, very user-friendly, and headache free. And I definitely don’t want to have to bother our developers with questions about where to find things or how to make edits.

Drupal 8 has seen improvements in a lot of areas when compared to Drupal 7, and content editing is no exception. Here are some of my favorite content editing improvements in Drupal 8.

A WYSIWYG is Included in Core

Drupal consists of its core codebase plus thousands of modules that can be added for custom functionality. A WYSIWYG editor (meaning “what you see is what you get”) provides standard text editing options when adding content across an entire site, and it used to require a separate module installation in Drupal 7.

WYSIWYGs are useful for a number of reasons. A key objective when using a CMS is that the content will always come out looking uniform. Fonts, sizes, headings, and colors should always be consistent. A uniform site looks cohesive and makes sense. If you have the option to choose a size, font, and color for each specific post, each individual post may look fine, but the site as a whole will look chaotic. You can provide users with all the options that they need in about fifteen buttons.

The WYSIWYG CKEditor was finally added to Drupal core with the release of Drupal 8, which is a huge improvement. You no longer need to install a separate module for this functionality.

Resizing Images is as Easy as Pie

Adding images to blog posts used to be a bit of a struggle for me because I couldn’t resize them once I uploaded them into the body of the post. Our new D8 site allows me to easily drag the corner of an image to resize. This is a major time saver and also no longer makes me want to pull my hair out.

I resized this image of an image. It was super easy.

Quicker Edits with Quick Edits

Drupal 8 also offers the option for quick editing. This means that content editors no longer have to navigate to the node edit page to make a quick change. Edits can be made directly from the view of the content.

User Roles and Permissions (Still)

While it’s not new with Drupal 8, user roles and permissions are still worth mentioning. Drupal offers the ability to pick and choose which site users get which types of permissions. This means you can restrict access to less tech savvy people so that they don’t accidentally break something. Content editors can just worry about editing content.

War Child UK describes itself as “a small charity that protects children living in some of the world's most dangerous war zones”. Founded in 1993, the charity works with vulnerable children and communities in war torn areas; providing safe spaces, access to education, skills training and much more to ensure that children’s lives are not torn apart by war.

War Child International has multiple offices all over the world, protecting, educating, and empowering marginalised children.

The Short Brief

The UK team came to us with a mission: to get a new site live as soon as possible. They’d done the planning and had the design; we needed to implement it quickly.

The previous War Child UK site wasn’t responsive, so they felt they were missing many donation opportunities from supporters on mobile and tablet devices. They also wanted a bold new design with a strong focus on imagery, for the highest possible impact and to create empathy.

From a Content Management System (CMS) point of view, they wanted to rationalise and consolidate content types, so the site could be easily managed without large overheads. Every penny counts for charities, and the website should enable them to get information and campaigns out quickly and easily, without a large website team or dependency upon external agents.

The original site also integrated with Stripe and Salesforce, and they wanted to ensure this functionality remained to provide a seamless experience for their donors, and reduce the need for internal training on the new site.

They also had a deadline: they wanted to have the new site up and running in time for a new donation drive for Christmas.

We sat down and crunched some numbers. We calculated that we could have one three week sprint before we had to deploy to live. We got to work.

‘Good enough and live’ is better than ‘perfect but in development for months’.

How?

So, that was our mission: to build and launch a Minimum Marketable Product (MMP) with donations and a simplified CMS in a single three-week sprint. It had to be a polished product that would serve them during one of their busiest periods, without a loss of critical functions such as donations. An MMP achieves the earliest possible route to market and therefore brings benefits sooner without cutting corners. With a tight deadline, this approach was right.

Their previous site was Drupal, which we have a lot of experience with. There are many pre-existing modules available, such as Stripe and Salesforce integrations, which meant we could produce what they needed without much customisation. This freed the team up to focus on providing the best site we could.

We knew we had to get things right first time - we didn’t have time for cycles of iterations or lots of bugs back from the client. We always aim for right first time, but with this deadline it was imperative. We started writing Acceptance Criteria.

Acceptance criteria are a way of writing out the conditions that the software must meet to be accepted by the client, users, or other stakeholders. They’re written in non-technical terms, using language specific to the business or organisation in question. They are the single version of the truth of what we’re building, understood and signed off by the lead developer, the QA person, and the client.

The acceptance criteria combined with the designs meant we had a clear vision of what each piece of functionality should do and look like before we started work, and the client knew what to expect from our work before seeing it.

The client was on board with the MMP model, realising that ‘good enough and live’ is better than ‘perfect but in development for months’. We started planning the next two sprints while development work was happening on the first sprint, so War Child’s stakeholders had continued visibility of what future work was coming, and access to a backlog for long-term planning.


The small number of essential features meant we could still be flexible in what we offered in the first sprint - even bringing in small features from the backlog when we realised it would be useful for go live and that it would fit in our time scales.

We learned a lot from this small, fast project: that an MMP site is possible in weeks rather than months, but that it requires forward planning, flexibility, and buy-in from all members of the team. We learned a lot technologically as well, integrating with Stripe and Salesforce and customising them for War Child UK’s setup.

War Child UK now have in place a mobile donation platform underpinned by a powerful Drupal CMS. This is just the beginning. Thanks to Drupal’s extensibility and with our Agile process there are a series of additional features in the pipeline, all aimed at placing War Child in a better place to help children in crisis.


Mosul appeal: War Child have a team on the ground right now providing safe spaces and emergency care for 6,000 boys and girls who have fled the city.

Maintainable and Readable Gulp tasks

With any mid-to-large-sized Drupal 8 theme, it's really easy for the main Gulp file (gulpfile.js) to become unwieldy and complex. With dozens of tasks doing all kinds of automated work, before too long gulpfile.js becomes a soup of illegible code.

Additionally, members of your team might have different ways of naming Gulp tasks. One person might write a Sass building task called "buildSass" and another might create an identical task called "css."

It'd be nice to strip down gulpfile.js, make it readable, and somehow compartmentalize each task separately. Also, we want to cut down on task naming variations and create a unified system for structuring our tasks.

My current favorite way to handle these wishes is gulp-require-tasks. Basically, each task is written as an individual, CommonJS-style module. Then the tasks are arranged in directories, and that directory structure defines the task name. It is a very simple and predictable way to set up Gulp tasks.

The YAML settings file, default.gulpfile.yml, was discussed in the last post of this series, if you need a refresher.

gulp-require-tasks lets these tasks be accessible according to their structure. For example, to build the styles, you'll run "gulp styles:build" and to lint the JavaScript, you'll run "gulp scripts:lint." If you don't like the colon delimiter, you can change that too.

Update Gulp settings

In the last post we started the default.gulpfile.yml, and now we'll edit that same file to add in settings for the Gulp tasks we'll create in this project.

Open the file: it should look like this:

themeName: "myTheme"
themeDescription: "myTheme description"

Expand on that by adding settings for source and destination paths of Sass and JS:
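The snippet that followed here appears to have been lost from the post. A plausible reconstruction might look like the following, where the exact key names (src, dest, lint.enable, lint.failOnError) and paths are illustrative assumptions, not necessarily the author's originals:

```yaml
# default.gulpfile.yml -- tracked in version control.
themeName: "myTheme"
themeDescription: "myTheme description"

styles:
  src: "sass/**/*.scss"   # where the Sass sources live (assumed path)
  dest: "css"             # compiled CSS destination (assumed path)
  lint:
    enable: true          # turn Sass linting on or off
    failOnError: false    # stop the Gulp process on lint errors?

scripts:
  src: "js/src/**/*.js"   # JS sources (assumed path)
  dest: "js/dist"         # bundled JS destination (assumed path)
  lint:
    enable: true
    failOnError: false
```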

Under the "styles" and "scripts" sections of the YAML, you can see I added some linting options too. From within the YAML settings, people can enable or disable linting, and also decide if they want the Gulp process to stop when linting errors are detected.

Pulling these settings out of the Gulp tasks themselves and into this YAML file means that developers don't have to search through the tasks looking for settings to change. Instead, they have every setting exposed to them in this one, concise file.

Importing tasks for Gulp

We haven't written any Gulp tasks yet, but we can go ahead and set up their importing so they can be used.

Open up the gulpfile.js we started in the last post. It should look like this:

If you recall, we loaded default.gulpfile.yml and overrode it with any settings from gulpfile.yml, if that file exists. The gulpfile.yml file has the exact same structure as default.gulpfile.yml, but its settings can have different values. This lets other developers on the team override some settings if needed.

At this point in gulpfile.js, the config is loaded and ready to be used. Next, we integrate gulp-require-tasks.

Setting up gulp-require-tasks is super easy. We tell it where our Gulp tasks are located: in the "gulp-tasks" directory.

Then gulp-require-tasks passes arguments to each module in the directory (one module equals one Gulp task). The first argument is always gulp itself. The "arguments" setting for gulp-require-tasks is an array of other things you want to pass to each module. I've opted to pass in "config," the object representing the merged settings from the YAML files.
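In code, that integration amounts to only a few lines. This sketch continues the gulpfile.js described above (so `config` is the merged YAML settings object); the "./gulp-tasks" path follows the directory named in the text, and `separator` is the library's option for the task-name delimiter:

```javascript
// gulpfile.js (continued): turn every module under ./gulp-tasks
// into a registered Gulp task, named after its directory path.
const gulpRequireTasks = require('gulp-require-tasks');

gulpRequireTasks({
  path: __dirname + '/gulp-tasks', // where the task modules live
  arguments: [config],             // passed to each task after gulp itself
  separator: ':'                   // yields names like "styles:build"
});
```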

This is essentially all you need in gulpfile.js. However, I also like to add shortcut tasks that combine other tasks for quicker use. For example, general "build" and "lint" tasks might look like this:
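The example that belonged here is missing from the post; a minimal sketch using Gulp 3-style dependency arrays could look like the following, where the styles:* and scripts:* task names are assumptions that follow the naming scheme described above:

```javascript
// gulpfile.js (continued): shortcut tasks that alias groups of
// modular tasks, so "gulp build" runs both build steps at once.
gulp.task('build', ['styles:build', 'scripts:build']);
gulp.task('lint', ['styles:lint', 'scripts:lint']);
```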

Modular Gulp tasks

Let's start off by creating the Sass linting task. To help with this, I recommend using gulp-sass-lint. You'll want to read over how to set up sass-lint, which I won't cover in detail here. Essentially, you create a .sass-lint.yml file in the root of the project. That file contains all the rules you want to validate; for example, whether developers should avoid styling with IDs, or whether they should use RGB rather than hex values for colors.

After sass-lint rules are in place, open up the styles linting file. Here you'll see the guts of the linting task:
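The task body itself did not survive into this copy of the post. Here is a sketch of what a gulp-tasks/styles/lint.js module might contain, pieced together from the description that follows; the option keys under options.styles.lint are assumed names matching the YAML settings:

```javascript
// gulp-tasks/styles/lint.js -- exposed as "gulp styles:lint".
'use strict';

const cached = require('gulp-cached');
const gulpif = require('gulp-if');
const sassLint = require('gulp-sass-lint');

module.exports = function (gulp, options) {
  // Short-circuit if the user disabled Sass linting in the YAML settings.
  if (!options.styles.lint.enable) {
    return Promise.resolve();
  }

  return gulp.src(options.styles.src)
    .pipe(cached('styles:lint'))   // keep file contents in memory for speed
    .pipe(sassLint())              // lint against the .sass-lint.yml rules
    .pipe(sassLint.format())       // report results to the console
    // Terminate the Gulp process on lint errors, if configured to do so.
    .pipe(gulpif(options.styles.lint.failOnError, sassLint.failOnError()));
};
```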

For the three required packages, you'll want to "npm install" them of course. Don't forget the "--save-dev" flag to get those packages stored in package.json!

The bulk of the code exists within the standard, CommonJS "module.exports" directive. A Gulp process is passed into the task as well as the set of options from default.gulpfile.yml.

We start off by running a quick if/else check so that we short-circuit out of this task if the user disabled Sass linting. Then, we pipe in the files that we selected in the Gulp settings' "styles.src" section. Files are then piped through gulp-cached, which keeps a list of the source files (and contents!) in memory. This makes the task faster.

Next, the styles are linted, and the results are formatted and reported to the console. Finally, we use gulp-if to decide whether the Gulp process should terminate when linting errors are found.

The sky's the limit

I leave it as an exercise for the reader to go about developing the other Gulp tasks. In the next post, I'll go over some other, more complicated Gulp tasks to show more advanced usage. Until then, you're more than welcome to look over and reference our own Gulp tasks we publish for Bear Skin.

Gulp is a mainstay of front-end development nowadays. Of course, like all front-end development tools, there is a massive proliferation of build systems, from Webpack to SystemJS and Grunt to Gulp. Yet, we at Zivtech find ourselves using mostly Gulp, particularly when dealing with Drupal 8 projects.

This article is the first of a series of posts where I outline how Zivtech uses Gulp. In this first part, I'll talk about our reasoning and setup process.

Why does Zivtech use Gulp for Drupal 8?

The choice of Gulp over other front end tools is due to how Drupal utilizes front-end assets. It's perfectly fine to use something like Webpack or Browserify with Drupal, but those all-encompassing, "build and combine all the things!" systems are best used for projects that don't have a built-in asset pipeline. For example, Drupal concatenates and minifies CSS and JS for us, and it's really just over-compiling (is that a word?) to use something that Drupal obviates.

Also, we use Gulp over Grunt or even Broccoli (because yes, that's a thing too) strictly because Zivtech does a lot of node.js development as well. The concepts of streams and buffers in Gulp are used throughout node.js, and it makes sense that we'd align with our other development.

Many projects and distributed teams

As a client services company, Zivtech has many projects and several teams working on them. Thus, our build tasks have to be somewhat abstract so they apply to most situations. The first step to conquering the Gulp pipeline is figuring out a way to keep the tasks themselves static while letting the configuration remain changeable.

Some examples of these changeable settings include the website address that Browsersync should proxy while watching your development; this address could change on a per-user basis. The website name would also change on a per-site basis.

Within each project, we could just alter the Gulp tasks directly to account for these differences. Yet some people on the team may not be too familiar with Gulp, and you might be sending them into the weeds trying to suss out "that one weird setting" they should change.

At this point you might be thinking we should make a settings file for each project's Gulp tasks, and you'd be correct if so! The Gulp tasks remain the same, but the settings always change.

As it turns out, Drupal 8 has a preferred method for settings files: the YAML format. Being a flexible guy, I vote for just sticking with what the system wants. Thus, our new settings files will be written in YAML.

Using YAML for Gulp settings

First, let's think about how we're going to implement settings from a big picture perspective. We've already determined that we'll work in YAML and we'll have a default group of configuration settings available. We also want each member of the team to be able to override some settings to fit their situations.

It makes sense that we'll have a file called default.gulpfile.yml for the default settings. Gulp should merge another file, which we'll call gulpfile.yml, on top of the defaults. The default settings get tracked in Git or your chosen version control system, but the override file should not be. This allows complete flexibility for any setting you or one of your teammates might want to change.

In default.gulpfile.yml, start off by creating some basic settings:

themeName: "myTheme"
themeDescription: "myTheme description"

Next, create a gulpfile.yml to contain your customized settings:

themeName: "myRenamedTheme"

When Gulp runs, the themeDescription setting should match default, but the themeName setting should be overridden.
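The gulpfile.js code being described around here is missing from this copy of the post. A sketch of the load-and-merge logic, assuming js-yaml for parsing and lodash's merge as the text mentions, might be:

```javascript
// gulpfile.js -- load default settings, then merge user overrides on top.
'use strict';

const fs = require('fs');
const yaml = require('js-yaml');
const _ = require('lodash');

// Tracked defaults: always present.
let config = yaml.safeLoad(fs.readFileSync('default.gulpfile.yml', 'utf8'));

try {
  // Untracked, optional overrides: these win over the defaults.
  const custom = yaml.safeLoad(fs.readFileSync('gulpfile.yml', 'utf8'));
  config = _.merge(config, custom);
} catch (err) {
  console.log('No gulpfile.yml overrides found; using default settings.');
}
```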

Now, when you run any Gulp task, your config files will get merged by lodash. One day, Object.assign will be more widely available, and lodash won't be needed any longer. For now, things work fine this way.

You'll notice that loading the custom config is in a try ... catch block. We do that so there are no show-stopping errors if the custom config is not found. Additionally, if it's not found we can let the user know that only default settings are in use.

Wrapping up

Well, this has been a high-level explanation of how and why we use Gulp at Zivtech for D8 projects.

In the coming articles in this series, I'll expand on the simple gulpfile.js and default.gulpfile.yml files started. I plan to outline our process for linting and compiling CSS, linting and compiling JavaScript, and a couple extra tasks too, like integrating Bower and favicon generation. Until then!

Alan Stanley taught me this trick at an Islandora Camp a few years ago, and when trying to remember it this morning I messed up one critical piece. So I’ll post it here so I have something to refer back to when I need to do this again.

The Drupal Devel module includes a menu item for executing arbitrary PHP code on the server. (This is, of course, something you want to set permissions on very tightly, because it can seriously wreak havoc on your day if someone uses it to do bad things.) Navigate to /devel/php on your Islandora website (with the Devel module enabled), and you’ll get a nice, big <textarea> and an “Execute” button:

Execute arbitrary PHP using Drupal Devel module.

In this case, I’m generating the TECHMD datastream using the FITS module and displaying the results of the function call on the HTML page using the Devel module’s dpm() function:

Regardless of industry, staff size, and budget, many of today’s organizations have one thing in common: they’re demanding the best content management systems (CMS) to build their websites on. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly”, “rapidly-deployable”, and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease-of-use, reasonable pricing, and flexibility, but a long list of other valuable features, too: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out-of-the-box. This powerful, open source CMS allows for easy content creation and editing, as well as seamless integration with numerous 3rd party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively-priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

SEO + Social Networking:

Unlike other content software, Drupal does not get in the way of SEO or social networking. By using a properly built theme, as well as add-on modules, a highly optimized site can be created. There are even modules that provide an SEO checklist and monitor the site’s SEO performance. The Metatag module ensures continued support for the latest meta tags used by various social networking sites when content is shared from Drupal.

E-Commerce:

Drupal Commerce is an excellent e-commerce platform that uses Drupal’s native information architecture features. One can easily add desired fields to products and orders without having to write any code. There are numerous add-on modules for reports, order workflows, shipping calculators, payment processors, and other commerce-based tools.

Search:

Drupal’s native search functionality is strong. There is also a Search API module that allows site managers to build custom search widgets with layered search capabilities. Additionally, there are modules that enable integration of third-party search engines, such as Google Search Appliance and Apache Solr.

Third-Party Integration:

Drupal not only allows for the integration of search engines, but a long list of other tools, too. The Feeds module allows Drupal to consume structured data (for example, .xml and .json) from various sources. The consumed content can be manipulated and presented just like content that is created natively in Drupal. Content can also be exposed through a RESTful API using the Services module. The format and structure of the exposed content is also highly configurable, and requires no programming.

Taxonomy + Tagging:

Taxonomy and tagging are core Drupal features. The ability to create categories (dubbed “vocabularies” by Drupal) and then create unlimited terms within each vocabulary is connected to the platform’s robust information architecture. To make taxonomy even easier, Drupal provides a drag-and-drop interface for organizing terms into a hierarchy, if needed. Content managers can use vocabularies for various functions, eliminating the need to replicate efforts. For example, a vocabulary could be used for content tagging, for building complex drop-down lists and user groups, or even for building a menu structure.

Workflows:

There are a few contributed modules that provide workflow functionality in Drupal. They all provide common functionality along with unique features for various use cases. The most popular options are Maestro and Workbench.

Security:

Drupal has a dedicated security team that is very quick to react to vulnerabilities that are found in Drupal core as well as contributed modules. If a security issue is found within a contrib module, the security team will notify the module maintainer and give them a deadline to fix it. If the module does not get fixed by the deadline, the security team will issue an advisory recommending that the module be disabled, and will also classify the module as unsupported.

Cloud, Scalability, and Performance:

Drupal’s architecture makes it incredibly “cloud friendly”. It is easy to create a Drupal site that can be setup to auto-scale (i.e., add more servers during peak traffic times and shut them down when not needed). Some modules integrate with cloud storage such as S3. Further, Drupal is built for caching. By default, Drupal caches content in the database for quick delivery; support for other caching mechanisms (such as Memcache) can be added to make the caching lightning fast.

Multi-Site Deployments:

Drupal is architected to allow for multiple sites to share a single codebase. This feature is built-in and, unlike WordPress, it does not require any cumbersome add-ons. This can be a tremendous benefit for customers who want to have multiple sites that share similar functionality. There are few–if any–limitations to a multi-site configuration. Each site can have its own modules and themes that are completely separate from the customer’s other sites.

Want to know other amazing functionalities that Drupal has to offer? Stay tuned for the final installment of our 3-part “Why Drupal?” series!

In April 2015, NASA unveiled a brand new look and user experience for NASA.gov. This release revealed a site modernized to 1) work across all devices and screen sizes (responsive web design), 2) eliminate visual clutter, and 3) highlight the continuous flow of news updates, images, and videos.

With its latest site version, NASA—already an established leader in the digital space—has reached even greater heights by being one of the first federal sites to use a “headless” Drupal approach. Though this model was used when the site was initially migrated to Drupal in 2013, this most recent deployment rounded out the endeavor by using the Services module to provide a REST interface, and ember.js for the client-side, front-end framework.

Leveraging the strength and flexibility of Drupal’s back-end to easily architect content models and ingest content from other sources. As examples:

Our team created the concept of an “ubernode”, a content type which homogenizes fields across historically varied content types (e.g., features, images, press releases, etc.). Implementing an “ubernode” enables easy integration of content in web services feeds, allowing developers to seamlessly pull multiple content types into a single, “latest news” feed. This approach also provides a foundation for the agency to truly embrace the “Create Once, Publish Everywhere” philosophy of content development and syndication to multiple channels, including mobile applications, GovDelivery, iTunes, and other third party applications.

Additionally, the team harnessed Drupal’s power to integrate with other content stores and applications, successfully ingesting content from blogs.nasa.gov, svs.gsfc.nasa.gov, earthobservatory.nasa.gov, www.spc.noaa.gov, etc., and aggregating the sourced content for publication.

Optimizing the front-end by building with a client-side, front-end framework, as opposed to a theme. For this task, our team chose ember.js, distinguished by both its maturity as a framework and its emphasis on convention over configuration. Ember embraces model-view-controller (MVC), and also excels at performance by batching updates to the document object model (DOM) and bindings.

In another stride toward maximizing “Headless” Drupal’s massive potential, we configured the site so that JSON feed records are published to an Amazon S3 bucket as an origin for a content delivery network (CDN), ultimately allowing for a high-security, high-performance, and highly available site.

Below is an example of how the technology stack which we implemented works:

Using ember.js, the NASA.gov home page requests a list of nodes of the latest content to display. Drupal provides this list as a JSON feed of nodes:

Ember then retrieves specific content for each node. Again, Drupal provides this content as a JSON response stored on Amazon S3:

Finally, Ember distributes these results into the individual items for the home page:

The result? A NASA.gov architected for the future. It is worth noting that upgrading to Drupal 8 can be done without reconfiguring the ember front-end. Further, migrating to another front-end framework (such as Angular or Backbone) does not require modification of the Drupal CMS.

In all, I flew 64,082 miles (103,130 kilometers for the metric fans in the audience), presented 29 times, with 13 distinct presentations at 20 conferences and user groups across 3 continents, and spent 82 days on the road (not counting non-conference travel). You know what that means?

It means I created about 10 metric tonnes of carbon pollution.

The downside of business travel

Jet fuel is a major contributor to greenhouse gas emissions. The more you fly, the more carbon dioxide and other waste gases you contribute to the atmosphere and the more we continue the downward spiral of human-created climate change. Flying is way worse than driving in that regard. Most people don't fly all that much but if you're a frequent conference-goer like I am (and like I know a great many of my friends and colleagues are) then air travel pollution is a significant contributor to us destroying our world.

I know some people have called for a reduction in air travel, powered by remote-conferencing technologies, but as anyone who has actually used them knows, they are at best a useful but poor substitute for in-person interaction. Humans are social beings, and we are not going to stop traveling to spend time hanging out and learning from each other. That's a pointless battle to fight.

A partial solution

Fortunately, there is an alternative. Many companies offer "carbon credits". The basic idea is that if you generate 1 tonne of carbon dioxide, you invest in funding some project that will reduce overall carbon dioxide output (or equivalent from other greenhouse gases like methane) by an equivalent amount. That could range from reforestation efforts to methane burnoff to any number of other techniques. The end result is that you are, in effect, "carbon neutral". It's less ideal than reducing your greenhouse gas emissions in the first place but it can reduce your impact.

It's also far, far cheaper than you would expect. I chose this year to offset my travel with credits from a company called TerraPass, as my company Palantir.net has worked with them before. (The carbon offset industry is too new to be regulated, so be careful of scams.) The cost of offsetting 1000 lbs of carbon dioxide? $5.95 USD. That's it. Less than breakfast at Starbucks. That means offsetting all of my air travel in 2014 is a mere $130. Adding in my home energy usage and driving brought the total up to about $260. That's it. You probably spent more than that on your phone.

So I did. And you should too.

Your turn

I know many of my readers are frequent conference travelers and speakers. Many of them cross an ocean much more than I do. Friends, that means you're churning out just as much greenhouse gas pollution as I am, if not more. It's ridiculously cheap to compensate, and only takes a few minutes.

My challenge to you then is this: Offset yourself. I'm not going to tell people to stop going to conferences (that would be rather hypocritical), but I am going to call on everyone who attended or spoke at a conference to, at least, buy offsets for their air travel if not their full carbon footprint. You spent more on the ticket than you will on the offsets. (If you were a speaker and got your ticket free, you spent more on the cab from the airport.) If you can afford to attend a conference, you can afford a latte grande's worth of carbon offsets.

No, it won't cure the world, but every little bit helps. And if we can make it a trend and an expectation, especially for us frequent flyers, it can have a larger impact.

Conference organizers, you too

I know of only one conference that offered attendees the option of purchasing carbon offsets at registration, and that was DrupalCon Chicago 2011. Conference Organizers: Let's make it easier for people to go neutral for your conference.

Partner with some reputable carbon offset company, give people a calculator for their travel distance, and let them buy offsets along with their ticket, T-shirt, and whatever else. Make it optional, sure. (Opt-out would be nice, but possibly not feasible without some default travel distance for the calculation.) But put it there in people's faces. For most people it will cost less than the T-shirt.

Make that your 2015 resolution: At least make your business travel carbon-neutral. It's cheaper than a gym membership and much easier to stick with. And you don't even have to break a sweat.

Even if you've never heard of Disqus before, you've almost certainly seen it. With slick social integration, support for a variety of platforms (including WordPress, Blogger, Tumblr, and Drupal) and a rapidly expanding user base, Disqus has become one of the most popular hosting services for website comments. If you've had misgivings about the commenting system you're using – or just envied the functionality and polish of other systems – you may get to a point where you decide (as we did for our Forum One blogs) to make the switch to Disqus.

Making this switch is fairly straightforward – but what about all those wonderful comments your blogs have earned over the years? You don't want to lose all the valuable discussion, tips, and virtual high fives from your online fans, do you? Probably not. So if you're using Drupal and looking to switch to Disqus from a system like Mollom, there are various resources for manually migrating those comments, but nothing explains the process from A to Z, so I've compiled this step-by-step guide to show you how to do the job manually.

There are two primary ways to migrate comments: (1) using the API to automate the whole process, directly connecting to Disqus from Drupal to move the comments for you (which isn't working at the moment); or (2) using the Disqus module's XML export tool to export comments from Drupal in XML format, which you can then import manually into Disqus.

Versions

Disqus module (do NOT use 6.x-1.9, which was the stable version at the time of writing; in typical Drupal fashion, the dev version is the one to use: 6.x-1.x-dev, released Nov 22, 2013, which contains the extra features needed to do this).

This was done in Drupal 6, though D7 is very similar.

Step 1 - Enable Modules

Enable both Disqus and Disqus Migrate modules (Disqus Migrate provides migration tools for moving comments between Disqus and the Drupal comment system). Disqus has a great reference for this.

Note: There are additional steps to enable Disqus (i.e., downloading and installing the API library, adding your Disqus keys and verifying the connection with Disqus, and setting the relevant Drupal access permissions), which are well documented in the list of references I've provided at the foot of this article. Since those steps are for setting up the module rather than carrying out the actual migration, I won't review them here.

Step 2 - Export Drupal Comments

Navigate to the Disqus export page at /admin/content/comment/disqus_export. It contains two buttons: Export Awaiting Comments Using API (which doesn't work as of this writing) and Export All Comments to XML File, which is the one you should click.

Select Export All Comments to XML File. (This option is described as: "Exporting via XML will just gather all of your website's comments and format them for importing manually into Disqus.")

Save the file to your local file system.

At this point you will have all the comments from your site in XML format, which we will now import into Disqus.

Step 3 - Import Comments into Disqus

Upload the XML file you just saved using the import tool in the Disqus Admin interface. Upon completion, the comments will be imported to Disqus, and you may want to do some housekeeping on those comments using the Disqus Admin tools.

Step 4 - Optional, but pretty much needed

After the comments are imported you may notice that some comments that exist on your Drupal nodes are not being displayed in Disqus. This is most likely caused by URL aliases. Since Disqus maps comments to nodes by path, example.com/node/32 and example.com/how-to-play-ball may be the same node, but Disqus doesn't know that; the result is that those comments will only show for the URL that was in the export XML file. What we need to do now is provide Disqus with a URL mapping.

In Disqus navigate to Admin -> Discussions (Tab) -> Tools (Tab)

Select Start URL Mapper. This takes you to a page listing the comment paths Disqus knows about, so you can either provide manual path-mapping information or have Drupal create a mapping file for you. I chose the latter, using a custom script. This page also shows you the format it expects for the CSV (comma-separated values) file:
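For reference, the mapper expects a plain CSV with one mapping per line: the old URL Disqus currently has for a thread, a comma, and the new URL it should map to. A hypothetical example (your domain and paths will differ):

```text
http://example.com/node/32, http://example.com/how-to-play-ball
http://example.com/node/45, http://example.com/another-alias
```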

Follow the link to download the CSV file from Disqus to your local file system and name it disqus-urls.csv.

Copy this CSV file to your Drupal root directory.

Create a file called disqus-custom-script.php in that same Drupal root directory, containing the following PHP code:

Browse to the PHP file (or execute it another way that lets you see the echoed output) at yoursite.com/disqus-custom-script.php, assuming you named the file as above and saved it in the root directory. This will give you output that can be saved into a CSV file, in the exact format that Disqus expects.
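The original script isn't reproduced in this copy of the post, but a minimal sketch of the idea, assuming a Drupal 6 site, the disqus-urls.csv file from the previous step, and drupal_get_path_alias() for the alias lookups (the domain and variable names are illustrative), might look like this:

```php
<?php
// disqus-custom-script.php - hypothetical sketch; place in the Drupal root.
// Bootstrap Drupal so path alias lookups work.
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

// Read the CSV of URLs that Disqus currently knows about.
$handle = fopen('disqus-urls.csv', 'r');
while (($row = fgetcsv($handle)) !== FALSE) {
  $old_url = trim($row[0]);
  // Pull the internal path (e.g. node/32) out of the old URL.
  $path = ltrim(parse_url($old_url, PHP_URL_PATH), '/');
  // Look up the current alias for that path.
  $alias = drupal_get_path_alias($path);
  if ($alias != $path) {
    // Echo "old URL, new URL" in the format the URL Mapper expects.
    echo $old_url . ', http://example.com/' . $alias . "\n";
  }
}
fclose($handle);
```

Because it bootstraps Drupal, the script must live inside the Drupal root and be removed once the migration is done.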

Make sure you give it a few minutes to complete, as the mapping in Disqus isn't instantaneous, but you should start seeing your comments appear where they should for the URLs that were in the mapping CSV file.

That's it! Below I've provided a list of references that you can use when following these steps. If this guide is helpful for you, or if you discover a different solution, please leave us a comment ;)

Bootstrap is a great Drupal theme that outputs your form elements and other Drupal markup with the proper Twitter Bootstrap CSS classes. One downside is that the popular Webform module has elements that don't get styled by the Bootstrap theme, so they render as unstyled form fields.

To fix it, open Bootstrap's theme/process.inc file and add 'webform_email' to the 'types' array in the _bootstrap_process_input() function. This will style Webform's special email field. Other fields likely have different types. The reason they don't get styled is that the 'types' array only lists the default form element types, not the special ones Webform defines.

If you want to see what the #type is on an element, I recommend installing the Devel module and calling "dpm($element);" inside the theme/alter.inc bootstrap_element_info_alter() function. This will output all of the elements on your current Webform.
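As a sketch of both tweaks (the exact contents of the 'types' array vary by Bootstrap theme release, so treat these excerpts as illustrative):

```php
// theme/process.inc (excerpt, illustrative): extend the list of element
// types that receive Bootstrap styling in _bootstrap_process_input().
$types = array(
  'textfield',
  'password',
  // Added: styles Webform's special email field.
  'webform_email',
);

// theme/alter.inc (excerpt): dump element info to find an element's #type.
function bootstrap_element_info_alter(&$info) {
  dpm($info);  // Requires the Devel module; remove when done.
}
```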

Although the Drupal Kaltura module (6.x-2.0-beta2) is great for most video playback, it doesn't always play nicely with Organic Groups. Recently I was tasked with using the module on an existing Drupal 6 Commons site. The problem was that Kaltura media nodes did not automatically inherit the group association when created in a group context. In other words, when a GET parameter for a group id (gid) was present on the node add page for kaltura_entry content types, Kaltura content would not be associated with the group the way other group post content types are. This is because the module relies on notifications from the Kaltura service before a node is actually created, at which point the parameter is no longer available. So all we needed to do was pass some data to Kaltura and have Kaltura pass it right back to us.

After some extensive research, I finally came across this documentation page. The section "Passing Partner Data to the KCW Widget" (KCW = Kaltura Contribution Wizard) explains that we do have a partnerData key in the variables passed to the Contribution Wizard widget and then on to Kaltura. In fact, the module already does something similar to pass the user ID. All we had to do was plug the gid in there then handle the returned data. This does require a bit of patching to the contributed module.

First, plugging in our group id. This required adding the following around line 47 of kaltura/kaltura_client/kaltura_helpers.php:
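The exact patched line isn't preserved in this copy, but based on the description that follows (pipe-delimited pairs joined with @), the added partnerData value looked something like this sketch; the variable and key names are assumptions:

```php
// kaltura_helpers.php (excerpt, illustrative): pass the user ID and the
// group ID from the ?gids[] query parameter through to the Contribution
// Wizard widget, so Kaltura's notification hands it back to us later.
$gid = isset($_GET['gids'][0]) ? (int) $_GET['gids'][0] : 0;
$flash_vars['partnerData'] = 'uid@' . $user->uid . '|gid@' . $gid;
```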

Note that the values are delimited by pipes, and paired with an @. Next, we had to figure out how to handle the returned data. This required modifying the kaltura_notify_node_entry_add() function in kaltura/includes/kaltura.notification.inc to assign the node to a group during node construction. I was not entirely sure how to do this off the top of my head, so I relied on the suggestions in this Drupal stack exchange post and dropped the following code in around line 206 right before node_save($node).
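The patch itself isn't shown in this copy, but the idea, following the Stack Exchange suggestion of assigning the group during node construction in Drupal 6 Organic Groups, is roughly this sketch (variable names are assumptions):

```php
// kaltura.notification.inc (excerpt, illustrative): pull the gid back out
// of the returned partnerData and associate the node with that group.
foreach (explode('|', $partner_data) as $pair) {
  list($key, $value) = explode('@', $pair);
  if ($key == 'gid' && $value) {
    // Drupal 6 OG: list the node in the group; adjust og_public as needed.
    $node->og_groups = array((int) $value);
    $node->og_public = FALSE;
  }
}
// ... existing code continues down to node_save($node).
```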

As always, when modifying a contributed module, I saved these changes in a patch file in the repository so they can be tracked and re-applied in the event that we upgrade the module later (if the patch is not already applied by then).

Shared database tables, such as a shared users table, are a great way to share users between sites, and the approach works for many other entities as well. The tricky part is using Solr indexing on this content.

The way Solr indexing works is that when a new entity that should be indexed is added, it is marked for Solr indexing as well. This works great, but its limitation is that the index tracking is stored in the current site's database. This means that if you have Site A and Site B, and Site A adds a user (to the shared user table that Site B also uses), Site A will mark it to be indexed by Solr. In turn, Solr will add it to Site A's index, but Site B never marks it for indexing (since, as far as Site B is concerned, the user has always existed). To solve this problem I created a cron hook in a module used on both sites. It scans the Solr index for items of a desired type and compares them to the actual shared table to see if any are missing from the index. Any items found that are not in the Solr index are added.

This is beneficial for many reasons. For example, if you have views that use a Solr index for content, you want to make sure everything in your shared table is in this index. In my case, user searches on Site A were missing users that were created on Site B.

Why a cron hook (instead of hook_node_insert())? You need to make sure the other site is able to find the entities it didn't create. Cron is great since it runs on both sites no matter what. Insert hooks will only fire on the site being used to add the user; the other site is unaware, even if both share the same module. Site B will not respond with an insert hook to a node insert done on Site A.

// Ensure that users are indexed correctly, regardless of which site they were
// created on. Since users can be created on two sites,
// search_api_entity_insert($entity, $type) might not be called, which results
// in an incomplete user index.
function MYMODULE_cron() {
  // Make sure the search_api module exists.
  if (module_exists('search_api')) {
    // I am searching for users, so this is my type. This could be a different
    // type if not being used for users.
    $user_type = 'user';
    $conditions = array(
      'enabled' => 1,
      'item_type' => $user_type,
      'read_only' => 0,
    );
    // Each type in the search_api index has a different ID. That ID can differ
    // even between sites that share the same item type, so it is important to
    // look up the index ID of the entity type you are loading for use in the
    // following steps. We will of course only get one index back.
    $indexes = search_api_index_load_multiple(FALSE, $conditions);
    if (!$indexes) {
      return;
    }
    // For the one Solr index found.
    foreach ($indexes as $index) {
      // Select all users from the shared users table that are not present in
      // the search index on this local site.
      // Note that the users table is set to use the shared database:table in
      // settings.php of both sites.
      // The following Drupal query essentially does this SQL query:
      //   SELECT u.uid FROM SHARE_db.users AS u
      //   WHERE uid NOT IN
      //     (SELECT item_id FROM THISSITE_db.search_api_item
      //      WHERE index_id = ThisSolrUserTypeID);
      $query = db_select('users', 'shared_u');
      $query->addField('shared_u', 'uid');
      $query_exists = db_select('search_api_item', 'indexed_users');
      $query_exists->condition('indexed_users.index_id', $index->id, '=');
      $query_exists->addExpression('NULL');
      $query_exists->where("indexed_users.item_id = shared_u.uid");
      $query->notExists($query_exists);
      $results = $query->execute();
      // It is a good idea to log that users were found and are about to be
      // indexed, since this runs in cron. That way if something goes wrong we
      // can at least see that it started.
      if ($results) {
        watchdog('MYMODULE', 'Adding shared users to search index');
      }
      // $results now holds all the users that need to be indexed because they
      // were not found in the current site's Solr index.
      foreach ($results as $result) {
        // The user record doesn't appear to have been indexed, so call
        // Search API's search_api_entity_insert() directly and make it so.
        search_api_entity_insert(user_load($result->uid), $user_type);
        // Un-comment the following line if you want to see each entity you
        // are adding to be indexed by Solr.
        //watchdog('MYMODULE', 'Adding user ' . $result->uid);
      }
    }
  }
}

As a follow-up note, this only handles entity additions. If an entity is removed, a similar function is needed to remove it from the index as well; you can use search_api_entity_delete().
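A sketch of that counterpart, using the same index lookup as the cron hook above but in reverse (find indexed item IDs with no matching row in the shared users table, then tell Search API they are gone); the query details and the stub object are assumptions:

```php
// Illustrative companion to the cron hook above: un-index users that were
// deleted on the other site but are still present in this site's index.
function MYMODULE_cleanup_deleted_users($index) {
  // Indexed item IDs with no matching row in the shared users table.
  $query = db_select('search_api_item', 'indexed_users');
  $query->addField('indexed_users', 'item_id');
  $query->condition('indexed_users.index_id', $index->id);
  $users = db_select('users', 'shared_u');
  $users->addExpression('NULL');
  $users->where('shared_u.uid = indexed_users.item_id');
  $query->notExists($users);
  foreach ($query->execute() as $row) {
    // A stub object carrying the uid is enough for entity ID extraction.
    search_api_entity_delete((object) array('uid' => $row->item_id), 'user');
  }
}
```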

If there is interest or if it would be helpful we could put together a Birds of a Feather at Drupalcon Portland (2013) for this. Let us know!

As Acting-CEO of ImageX (self-appointed while Glenn Hilton is on vacation with his family), I am pleased to announce our latest recruiting and professional development initiative. Starting immediately, ImageX Media will begin working with interested families across North America to train children under the age of two to develop web solutions of uncompromising quality using Drupal. This program, known as the OpenBaby Initiative, is believed to be a first of its kind in the web development community and has been received enthusiastically by the IXM staff.

“Most developers are babies anyway,” said Patrick Jones, Senior Project Manager. “At least with the OpenBaby Initiative I’ll be able to give underperforming devs a time out.”

HR Coordinator Gabrielle Garon, who has a nephew and therefore totally understands about children, was initially hesitant about the program but came to understand the value. “At first I was worried about the labour implications (no pun intended), but when it became clear we weren’t going to pay the babies, all my concerns went away.”

According to highly ranked Google search results, child development experts such as mcKeNNazMomMY107 agree that younger children learn the foundations of new languages more easily than adults. “We’re hoping this means one of them can figure out the Drupal 8 API,” exclaimed Technical Lead/Senior Nanny Shea McKinney.

The initiative is expected to have significant impact on all aspects of ImageX’s business.
“I don’t see why we would keep focusing on higher education,” offered Marketing Coordinator Brett Burns, “when we’ll have immediate inroads into the lucrative diaper and burp cloth verticals.”

Participants in the OpenBaby Initiative are expected to reach Senior Developer level by age 5, at which point they will unfortunately need to be laid off so they can attend kindergarten. Negotiations are underway with Acquia for all OBI severance packages to include a co-branded Dries Buytaert teddy bear.

At last year's Drupalcon in Denver there was an excellent session called Delivering Drupal. It had to do with the oftentimes painful process of deploying a website to web servers. This was a huge deep dive session that went into the vast underbelly of devops and production server deployment. There were a ton of great nuggets and I recommend watching the session recording for serious web developers.

The most effective takeaway for me was the manipulation of the settings files for your Drupal site, which was only briefly covered but not demonstrated. The seed of the idea that Sam Boyer presented got me wondering how to streamline my site deployment with Git. I was using Git for my Drupal sites, but not effectively for easy site deployment. Here are the details of what I changed for new sites that I build. This can be applied to Wordpress as well, which I'll demonstrate after Drupal.

Why would I want to do this?

When you push your site to production you won't have to update a database connection string after the first time. When you develop locally you won't have to update database connections, either.

Streamlining settings files in Drupal

Drupal has the following settings file for your site:

sites/yourdomain.com/settings.php

This becomes a read-only file when your site is set up and is difficult to edit. It's a pain to edit it to run a local site for development, not to mention that if you include it in your Git repository, it's flagged as modified whenever you change it locally.

Add settings.local.php to your site's .gitignore file. This will put settings.php and settings.production.php under version control, while your local settings.local.php file is not. With this in place, remove the $databases array from settings.php. At the bottom of settings.php, insert the following:

This code tells Drupal to include the local settings file if it exists, and if it doesn't it will include the production settings file. Since settings.local.php is not in Git, when you push your code to production you won't have to mess with the settings file at all. Your next step is to populate the settings.local.php and settings.production.php files with your database configuration. Here's my settings.local.php with database credentials obscured. The production file looks identical but with the production database server defined:
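The include snippet isn't visible in this copy of the post, but a minimal version of the pattern described, using the file names above, would be:

```php
// At the bottom of sites/yourdomain.com/settings.php:
// prefer a local settings file if present, otherwise fall back to the
// production settings file that lives in version control.
$local_settings = dirname(__FILE__) . '/settings.local.php';
if (file_exists($local_settings)) {
  include $local_settings;
}
else {
  include dirname(__FILE__) . '/settings.production.php';
}
```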

Streamlining settings files in Wordpress

Wordpress has a similar process to Drupal, but the settings files are a bit different. The config file for Wordpress is the following in site root:

wp-config.php

Go ahead and create two new files:

wp-config.local.php
wp-config.production.php

Add the following to your .gitignore file in the site root:

wp-config.local.php

This will make it so wp-config.php and wp-config.production.php are under version control when you create your Git repository, but wp-config.local.php is not. The local config will not be present when you push your site to production. Next, open the Wordpress wp-config.php and remove the defined DB_NAME, DB_USER, DB_PASSWORD, DB_HOST, DB_CHARSET, and DB_COLLATE variables. Insert the following in their place:
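The include snippet itself isn't visible in this copy of the post; a minimal version matching the description would be:

```php
// In wp-config.php, replacing the removed DB_* defines: load local
// database settings when present, production settings otherwise.
$local_config = dirname(__FILE__) . '/wp-config.local.php';
if (file_exists($local_config)) {
  include $local_config;
}
else {
  include dirname(__FILE__) . '/wp-config.production.php';
}
```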

This code tells Wordpress to include the local settings file if it exists, and if it doesn't it will include the production settings file. Your next step is to populate the wp-config.local.php and wp-config.production.php files with your database configuration. Here's my wp-config.local.php with database credentials obscured. The production file looks identical but with the production database server defined:

<?php
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'db_name');
/** MySQL database username */
define('DB_USER', 'db_user');
/** MySQL database password */
define('DB_PASSWORD', 'db_user_password');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

What's next?

Now that you're all set up to deploy easily to production with Git and Wordpress or Drupal, the next step is to actually get your database updated from local to production. This is a topic for another post, but I've created my own set of Unix shell scripts to simplify this task greatly. If you're ambitious, go grab my MySQL Loaders scripts that I've put on Github.

Drupal 8, originally scheduled for an August 2013 release, will from all appearances be much more than another version upgrade. There will be extensive improvements on issues that matter to all types of Drupal users. Even that doesn't do it justice: Drupal 8 will be a quantum leap among content management systems and web application frameworks.

Who will Drupal 8 benefit the most, users or developers? This is hard to quantify, but so far it seems that the end user will feel the biggest shift. The most dramatic changes for end users will be a simplified interface for content modification, and improved mobile compatibility. But these are not the only enhancements that are underway for what is undoubtedly the most ambitious Drupal version to date.

If you can post to Facebook, you can post to Drupal 8

Posting content will be as easy as it is on popular social networking sites. If you can post to Facebook, you will be able to post to Drupal without any additional training. The usability for site managers is also markedly improved. This is all due mainly to the Spark distribution work which allows in-place editing, see http://drupal.org/project/spark. The goal is that content creators, site managers and end users will have the option to just click what they want to edit on a page, like the title, text, or images and change them directly without having to switch to an administrative editing interface. I know that end users have instinctively tried to edit content just by clicking on blocks of text when given Drupal without any training. This update will make the process of seeing what your changes look like as you compose feel entirely natural.

Mobile

Drupal 8 is set up for mobile in multiple ways. The new Drupal is being built so that from the moment of first use, you will be able to interact with your site on both traditional and mobile displays. Additionally, work is underway on “responsive layouts,” which allow site creators to place regions of text, graphics and other elements so that everything appears readable on mobile devices and your laptop, auto-adjusting size and orientation to whatever device you are using at the time. Mobile apps will also be able to tie into Drupal 8.

Say you feel like logging into your Drupal site and checking on new comment activity, but you only have your mobile phone. With Drupal 8 you'll be able to do that with an interface that works well with your mobile device; no scrolling around and trying to enlarge text. While much of this is possible with Drupal 7 with extra setup beforehand, we're going to see this become the standard on Drupal 8.

Major work is going toward performance improvements in Drupal 8; we'll be blogging later to explain how. To generate pages suitable for a variety of devices, Drupal 8 needs to be quick, and major progress is already underway to enhance speed, mainly on the front end, which means on your end-user device.

Lastly, efforts are being made to include back-end wizardry that allows custom apps to connect to Drupal in standardized ways using new and improved web services. Web services are how different computer devices communicate with each other over the Internet. When you visit somewhere else in the world, it is good to speak the language, in this case the computer's language. Improved web services allow your Drupal 8 site to communicate better with the world, that is, with other applications, be they mobile or most anything else that speaks these standardized data languages.

Other Initiatives

The other main initiatives, setting aside the many tweaks and interface improvements, are multilingual support, design, and configuration management (and the Views module is now in core).

If you have a multilingual site, or more to the point, want a multilingual site, Drupal 8 now includes the language systems in core, so adding languages and translations works more like installing or updating modules.

Designers will also see big changes with the way themes are made using Drupal 8, and given the mobile initiatives this is imperative. The goal here is to make design (theming) work better. The end result is cleaner and more elegant web design.

There will also be improvements in configuration management. When creating sites, most developers have multiple installations of the site: development, staging and production, or minimally development and production. In Drupal 8, configuration management makes it easier and more methodical to maintain these separate installations while simplifying deployment of new code, updates and alterations. Besides the time saved for developers, these procedural improvements will benefit site owners because their site can be better maintained, more stable, and more secure.

The Views module has recently been added to core. If you don't know what Views is in Drupal, suffice it to say that with a default Drupal 8 installation, site builders will be able to make complex web applications, similar to many of the popular ones we know and love (Twitter, for instance), without adding external modules (though of course you may end up adding a few of the thousands of available modules for extra functionality). Yep, it's excellent.

That's a long list so far, and it's just the beginning. Drupal 8 has even more in store for all of us, thanks to the large and growing community of ambitious and hard-working contributors.

This blog entry is based on an informal presentation and post-discussions about Drupal 8 given by Darrell Ulm at Drupal Camp Ohio 2012.

Features is a popular module that’s been circulating around in the Drupalsphere for over three years. For those unfamiliar, it creates a module out of a set of configurations and entities from a Drupal site, capturing them in code, therefore allowing you to easily recreate the same configurations and entities on another site (or the same site, for that matter). A quick search will turn up multiple tutorials on how to get started with site building using Features, such as this very nice three-part video tutorial by Code Karate, so we won’t elaborate too much here. For those familiar with using Features, I highly suggest reading Lullabot’s post on the backstory behind Features, as it illustrates very well the disconnect between the original vision of the module and the now common use case of deployment across sites.

However, as an avid programmer, Features appeals to me in another way. As a site-building tool, Features can allow you to bypass Drupal’s UI and interact with the structure of the site through a different facet. Below I will illustrate two examples of how we have taken advantage of Features’ site-building capability.

Multilingual static content entry on a panel

In this situation I needed to place multiple panes of static content on a panel. To place the first language-aware pane, we follow these steps:
1. click “add content” at desired region
2. click “New custom content”
3. input desired content, via copy-paste or otherwise
4. click the gear to open up the contextual menu for the pane we’ve just created
5. click “add a new rule” under the visibility section
6. choose “User: language” and click next
7. select the appropriate language, English in this case, and click save

As we can see from the above workflow, that’s quite a bit of clicking around and waiting for the ajax to load. Now if we have 6 languages, we’d have to repeat the above 5 more times.

Now if we have this panel nicely packaged in a feature, instead of adding the next pane through the UI, let’s open up the file that corresponds to the panel in the feature. For a panel page it will be named “name_of_feature.pages_default.inc”.

Now the code of interest to us is from line 62 to 104 in the above code block, which represents the first pane we’ve created. We can use it as a template for our next pane: simply make a copy of the code right below it and modify as needed. First we make sure to change the pid (pane id) so it does not collide with the existing pane. By default the export function for panels auto-increments the pid from “new-1” onward, so the next logical pid would be “new-2”; however, you may assign any pid you like here. As seen in the modified end result code block below, we’ve changed the pid at lines 106, 146, and 147. We also need to change the array index on line 147 from "[‘middle’][0]" to "[‘middle’][1]". Once again, the exact index does not matter so long as it does not collide; the ordering of the panes actually derives from the order in which the code is processed.
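In outline, the duplicated pane (with the renamed pid and the bumped index, both arbitrary as noted) ends up looking like this sketch; the property names follow a standard Panels export, but the values are placeholders:

```php
// Illustrative excerpt of the copied pane in name_of_feature.pages_default.inc.
$pane = new stdClass();
$pane->pid = 'new-2';      // Was 'new-1' in the block we copied.
$pane->panel = 'middle';
$pane->type = 'custom';
// ... subtype, configuration, and the access (language rule) property,
// copied from the first pane and adjusted as described below ...
$display->content['new-2'] = $pane;       // Key must match the new pid.
$display->panels['middle'][1] = 'new-2';  // Index bumped from 0 to 1.
```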

Now that we’ve taken care of the collisions, we can start changing the properties of interest. The access property of the $pane object starting at line 111 appears to correspond to the language visibility rule we’ve set for the first pane, so let’s adjust it.

Bulk creation of image styles

Let’s say you are working on a project that needs a dozen or so different image style dimensions. It can be quite a chore to create these manually, so let’s see how we can leverage Features to expedite the process.

Here’s the generated code from the features.inc file for an image style with the scale and crop effect set to 300×300.

// Exported image style: scale_crop_300x300.
$styles['scale_crop_300x300'] = array(
  'name' => 'scale_crop_300x300',
  'effects' => array(
    1 => array(
      'label' => 'Scale and crop',
      'help' => 'Scale and crop will maintain the aspect-ratio of the original image, then crop the larger dimension. This is most useful for creating perfectly square thumbnails without stretching the image.',
      'effect callback' => 'image_scale_and_crop_effect',
      'dimensions callback' => 'image_resize_dimensions',
      'form callback' => 'image_resize_form',
      'summary theme' => 'image_resize_summary',
      'module' => 'image',
      'name' => 'image_scale_and_crop',
      'data' => array(
        'width' => '300',
        'height' => '300',
      ),
      'weight' => '1',
    ),
  ),
);

return $styles;
}

Now here’s the code for an additional image style with also the scale and crop action but set to 400×400.

// Exported image style: scale_crop_300x300.
$styles['scale_crop_300x300'] = array(
  'name' => 'scale_crop_300x300',
  'effects' => array(
    1 => array( // line 17
      'label' => 'Scale and crop',
      'help' => 'Scale and crop will maintain the aspect-ratio of the original image, then crop the larger dimension. This is most useful for creating perfectly square thumbnails without stretching the image.',
      'effect callback' => 'image_scale_and_crop_effect',
      'dimensions callback' => 'image_resize_dimensions',
      'form callback' => 'image_resize_form',
      'summary theme' => 'image_resize_summary',
      'module' => 'image',
      'name' => 'image_scale_and_crop',
      'data' => array(
        'width' => '300',
        'height' => '300',
      ),
      'weight' => '1',
    ),
  ),
);

// Exported image style: scale_crop_400x400.
$styles['scale_crop_400x400'] = array(
  'name' => 'scale_crop_400x400',
  'effects' => array(
    2 => array( // line 39
      'label' => 'Scale and crop',
      'help' => 'Scale and crop will maintain the aspect-ratio of the original image, then crop the larger dimension. This is most useful for creating perfectly square thumbnails without stretching the image.',
      'effect callback' => 'image_scale_and_crop_effect',
      'dimensions callback' => 'image_resize_dimensions',
      'form callback' => 'image_resize_form',
      'summary theme' => 'image_resize_summary',
      'module' => 'image',
      'name' => 'image_scale_and_crop',
      'data' => array(
        'width' => '400',
        'height' => '400',
      ),
      'weight' => '1',
    ),
  ),
);

return $styles;
}

We can see that the export code for each image style is nicely chunked, which makes it very easy to copy and paste additional image styles with varying dimensions. We just need to watch out for any incrementing indexes, such as those on lines 17 and 39, and make sure we increment them accordingly for each image style we add.

These are just a small taste of the level of manipulation that can be done to Features’ exports, and by proxy, to a Drupal site’s structure. As you get more familiar with each module’s Feature exportables you will find more and more ways to directly get at the settings you want and bypass the UI. This approach is one that I find myself turning to more and more as it has saved me many work hours.

What's a site worth if you can't ambiguously tell authors how terrible their articles are by rating them on a scale of 1 to 5 chili peppers? How can we discriminate among nodes by only allowing the popular ones to sit at the cool nodes' table? Well, lucky for us, we're going back to high school in this blog post as I introduce you to a few cool Drupal modules that show us how to get organic results for popular content.

Fivestar

Fivestar is one of the more basic, but useful, rating modules that adds “rating stars” to content. Users rate the node based on the number of stars you made available, then you can view the average rating of the node to see how popular it is. Simple, right? Fivestar is based on the Voting API module, is officially supported in Acquia Drupal, integrates with Views, and is easily the quickest way to add ratings to nodes.

Want to extend Fivestar? Check out its add-on modules. Want your own custom Fivestar widgets? Drupal Ace can show you how in their tutorial.

Rate

Rate is pretty similar to Fivestar in the voting aspect, but it also allows for thumbs up/down, emotion ratings (funny, mad, angry), etc. It’s like Fivestar and more, all in one. Rate also has some interesting features, such as Rate Expiration, which disallows voting after a set period of time (for that one guy who wants to down-vote an article... 3 years after it was posted). Rate doesn’t allow users to cancel their vote, whereas Fivestar does. Some will bicker back and forth about which one is better, but we’ll leave that to the drama nerds and their never-ending debate between Star Wars and Star Trek... Which, by the way, a Star Destroyer could take on the Enterprise any day.

Want to add some visualizations? Mix it up and integrate with a charting module to show off those sexy bar charts. I can hear the ladies running already.

User Points

If you want to add a bit of narcissism to your site, look no further than User Points. Okay, I kid; healthy competition between users might be a better phrase. User Points allows users to gain or lose points by performing different actions on your site, such as writing a product review or commenting on a node. With this module, the users are fighting for popularity instead of your content. User Points ties into Services, Rules, and Views, which makes it even more site-builder friendly.

If you’d like to extend User Points, check out its add-on modules for even more functionality.

Radioactivity

If this list of modules were your high school homecoming court, then it's time to meet your King (or Queen). We've met the runners-up, but Radioactivity steals the show. This module gives you the most organic results for content popularity by acting as a hotness meter: when an entity is getting attention (either from views or from actions defined by rules), its energy increases by an amount you set, while entities that aren't receiving attention cool down. Pretty cool, huh... er, I mean hot? You get the point.

So why the name Radioactivity? The cool-down rates are based on the concept of half-life: the amount of time it takes for a quantity to reach half of its original value. Using Radioactivity, you create a decay profile that sets the cool-down rate for the entities it is assigned to. Want to know the current trending articles on your site? Set the half-life to 3 hours and the granularity to 15 minutes (the interval at which energy values are updated), and watch as the popular articles float to the top while the not-so-hots sink to the bottom in real time. Have an ecommerce site? Radioactivity can be integrated with Commerce as well.
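The half-life math itself is simple. Here's a minimal sketch of the decay curve (the function name and tick loop are mine for illustration, not Radioactivity's actual API):

```python
def cooled_energy(energy, elapsed_seconds, half_life_seconds):
    """Energy halves every half_life_seconds, like radioactive decay."""
    return energy * 0.5 ** (elapsed_seconds / half_life_seconds)

# An article starts with energy 100 and a 3-hour half-life, checked at
# 15-minute granularity:
half_life = 3 * 3600
for tick in range(4):
    print(round(cooled_energy(100, tick * 15 * 60, half_life), 1))
# prints 100.0, 94.4, 89.1, 84.1
```

Each view or rule-triggered action bumps the energy back up, so actively read content stays hot while ignored content slides down the curve.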

Of course, there are a number of settings you can tweak in the module, such as using memcache for storing energy values, so some direction helps. The project page links to a few tutorials and docs, but I think Teemu Merikoski of Wunderkraut has written an excellent introduction to Radioactivity.

Know of any interesting modules that help showcase popular content on your site? Let us know in the comments below.

If you have ever worked with views and columns in Drupal, you may have had the unfortunate and rather difficult task of getting the columns to render out just the way you really want them to. After some research, I believe I've found a workable approach.

The most common approach is probably cutting the row widths in half and then floating them left. This gives the "illusion" that they are actually in "columns". This works great if you want the view's row results to read left to right. However, I recently came upon the need for the view rows to stack on top of each other and then overflow into a second column.

Many of you might be thinking, "Why didn't he just use a Grid style plugin in Views? That is, after all, what it's there for!"

Answer: Responsiveness!

Grids render in table markup. Anyone who has ever tried theming a table to be responsive knows it's a lot of work to override default table positioning behaviors.

There had to be a better approach to getting these columns to actually function the way I needed them to. After a quick Google search, I came across the Views Column Class module. Great! This looked very promising! I started to mess around with it and quickly realized that it added classes in a custom view style. Yes, this could work. However, it's still the same technique we use for creating left-to-right "faux columns": relying on classes to float left or right. What I really needed was to actually alter the generated markup, separating the rows into columns that each have their own wrapper markup. This would allow for easier manipulation and styling of the actual columns. I started searching some more.

I finally came across this wonderful blog post by Amanda Luker: Flowing a list view into two columns. Granted, it's a little over a year old, but it definitely got me on the right path! The technique is rather solid, but I'm not really a fan of using preg_replace(). For performance reasons, it's better to generate the correct markup in the first place; there is no need to generate markup only to replace it later. This is Drupal, after all!

Not only that, I also needed to accomplish the following:

Easier way to manage which views get processed and converted into columns

Standardized view classing for the columns (including zebra striping and first/last classes). This is very useful for theming purposes.

Dynamic columns: the ability to produce any number of columns, not just two.

In the end, I felt some rewriting was in order. This approach handles all of the previously mentioned tasks. Below are the source code files (in Gist) needed to start preprocessing Views columns:

template.php - Contains the preprocess hook needed for your theme. I figured preprocessing an unformatted list would probably be the easiest.
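The chunking logic itself is language-agnostic. Here is a rough Python sketch of the idea (the real implementation lives in a PHP preprocess hook; the dict keys and class names below are illustrative, loosely following Views' row-class conventions):

```python
import math

def split_rows_into_columns(rows, num_columns):
    """Split a flat list of rendered view rows into columns that fill
    top-to-bottom and overflow into the next column, tagging each row
    with zebra-striping and first/last classes for theming."""
    per_column = max(1, math.ceil(len(rows) / num_columns))
    columns = []
    for start in range(0, len(rows), per_column):
        chunk = rows[start:start + per_column]
        column = []
        for i, row in enumerate(chunk):
            # Zebra striping restarts in each column (rows are 1-indexed,
            # so index 0 is "odd").
            classes = ['views-row', 'odd' if i % 2 == 0 else 'even']
            if i == 0:
                classes.append('first')
            if i == len(chunk) - 1:
                classes.append('last')
            column.append({'content': row, 'classes': classes})
        columns.append(column)
    return columns
```

Each inner list can then be wrapped in its own markup, which is exactly what makes the columns easy to style, compared to floating half-width rows.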

DrupalCons have always been the big-ticket events in the Drupal community. There are typically two DrupalCons per year, one in North America and one in Europe. Drupal Camps, on the other hand, have always been somewhat less of an attraction for larger audiences, as they tend to have a limited global reach. But times are changing, and as Drupal's reach expands and its user base grows, the Camps are becoming bigger each year. BADCamp 2012 was a well-oiled machine. There were 1652 registered attendees, a couple hundred more than last year's BADCamp, and only a couple hundred shy of the 2012 DrupalCon in Munich. With the increase in attendance and the addition of new summits, it looks like BADCamp is slowly starting to bridge the DrupalCon gap.

The main reason this year's BADCamp was such a success was the addition of five new pre-conference summits, bringing the total to eight. These summits allow like-minded individuals, business owners, and prospects to collaborate, learn, discuss, and share on topics pertaining to their specific interests and industry. With the summits, attendees get a full day of targeted learning as opposed to a brief 45-minute presentation during the weekend. No longer is BADCamp just a weekend event; it's now a four-day learning experience. This year's summits were: Mobile Drupal Summit, Drupal Product Summit, Drupal Business Owners Summit, Drupal DevOps Summit, Drupal UI/UX Summit, Core Developer Summit, Higher Education Summit, and Drupal Non-Profit Summit. This year we saw BADCamp solidify its spot as a community leader while effectively scaling, making it a must-do event for all Drupalers.

ImageX Media was a sponsor for the Drupal Business Owners Summit. Glenn Hilton, the CEO, spoke on How to Recruit and Retain Top Talent. His discussion covered topics such as where to scout talent, how to implement strategies to retain personnel, and retention tools.

ImageX Media also sponsored the Higher Education Summit, where Kristin Boden-MacKay from Portland State University presented on her web services implementation strategies and how OpenEDU has helped PSU reach their online goals and objectives. Kristin had a very compelling story and history that piqued the interest of the Higher Education audience. She drew on examples of ImageX Media's custom-built simple content syndication system and the rapid site deployment feature, both currently running on pdx.edu.

Though there were many exciting presentations at BADCamp this year, probably the two most popular with the ImageX Media team were Craig McEldowney's great introduction to Vagrant and what it can do to simplify and enhance developer processes, and Barry Jaspan's really interesting insight into what enterprise-scale architecture looks like, and what it takes to maintain it, in his talk on Acquia Cloud's infrastructure testing.

As the state of the community constantly changes and grows while adapting to the demands of the market, it is great to see the individuals and businesses that constitute the community continue to push the envelope and make events like BADCamp 2012 the best Camp yet.

Frustration has motivated me to write this post. It is largely a list of issues I have with Pantheon, but I will address the good things first.

The good bits (Some of which you may already know)

Innovative: I feel Pantheon is doing a great thing. They're helping Drupal developers simplify their workflow. They've created a service that stands above most other hosting services that currently exist. Additionally, Pantheon exudes the culture of listening to the Drupal community and continually trying to improve their product.

User Interface: For the most part, the user interface is pretty good. You're able to find things that you need without too much searching. It's not perfect and it could be improved.

Multi-tenant deployment is cheap: Deploying a multi-tenant structure is easy and takes little time and effort; it's just a couple of clicks away. You upload your database, then your files folder, and by the time you've fetched your favorite warm morning beverage you have a new Drupal installation.

Upstream updates: Being able to run updates for Drupal core is awesome. A single click lets you update code quickly, although drush is equally sufficient for this task.

Quick backup: With one click you're able to execute a complete backup. The backup is also conveniently compressed for quick download.

The bad bits

Pulling in changes: Moving code between dev, staging, and live requires clicking in the dashboard. I may be in the minority, but I really enjoy using git purely through the command line. Having to pull changes into your testing environment by clicking gets old quickly.

Drush / SSH-less access: While some innovative folks in the contributed module space have created a Drupal project for drush access, it still limits what you can do. I understand these limitations exist due to legitimate security concerns, but without full drush or SSH access, day-to-day work can be a burden for Drupal developers. I would much prefer the ability to sync databases and files from a command-line interface. I know Pantheon does a great job of building user interfaces to replace this, but being able to run `drush cc all` is superior in my opinion.

Working the Pantheon way: Using Pantheon, you're tied to a specific workflow. This includes exporting the database, downloading it from the browser, and then importing it onto your machine. This is OK the first 10 times; after a while it gets quite old. I would much rather use `drush sql-sync` for this.

New Apollo interface: The new Apollo interface has too many tabs and fancy dropdowns. Changing the environment now requires two clicks: open the dropdown, pick your environment, then pick the tab on the left side. Someone went a little crazy with Twitter Bootstrap. I would rather see a longer page; the tabs and dropdowns often obscure where and what you need. You also have to re-learn yet another workflow, which is a slight curveball.

503 errors: This issue was the most problematic. On one of our site setups, an unhelpful 503 error appears every time you visit the Features page or try to clear the cache. This instantly became an impediment to our team's productivity. We've posted a ticket; however, the ticket process has been rather slow. Different techs have come in, passed the issue along, and escalated it each time, but we have yet to see a resolution. (We're on day 7 of this problem.) Being able to call, wait an hour or two, and get it resolved then would be a more efficient use of my time, especially when something like this is blocking our project.

In the end it's up to you

Overall, it all depends on the workflow and tastes of the Drupal developer. Pick and choose what works for you. For some people, Pantheon is the right service and tool for the job. For me, I would much prefer more granularity and control. I really want Pantheon to succeed; it is fulfilling a real need in the Drupal community. Hopefully they'll continue to improve the product and I'll give them another shot later on. At the moment, though, it's not what I'm looking for.

For Drupal 8, we want to bake REST support directly into the core system. It's unclear if we'll be able to go full-on hypermedia by the time we ship, but it should be possible to add via contributed modules. For the base system, though, we want to at least follow REST/HTTP semantics properly.

One area we have questions about is PUT, in particular the details of its idempotence requirements. For that reason, I'm reaching out to the Interwebs to see what the consensus is. Details below.

For now, we're confining ourselves to RESTful access to entities, Drupal's main data objects. Every entity has a "native" URI at http://www.example.com/$entity_type/$entity_id, such as /node/5. We're currently looking at JSON-LD as our primary supported serialization format.

Naturally, for creating a new entity we cannot use PUT, since the entity ID is auto-generated. For that, we POST to a /node/add endpoint of some sort, which returns the URI of the created node. But what of updates?

Idempotence

At first blush, PUT /node/5 seems like an obvious thing to do. Simply PUT a JSON-LD representation of a node to an existing URI and it gets overwritten with the new version, no muss, no fuss. The problem is that Drupal, being a highly extensible system, cannot always guarantee the absence of side effects when that happens.

My understanding is that idempotence in HTTP is not absolute. For instance, GET, HEAD, and PUT are idempotent, but "incidental" side effects such as logging or statistics gathering are OK and not a violation of their idempotence. RFC 2616 has this to say on idempotence:

Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. (RFC 2616 section 9.1.2)

I'm not clear on "the side effects are the same" qualifier. Does that mean it can be a repeat of the same side effect, or a net-0 effect?

There are two places where this becomes relevant for Drupal.

Versioning

Many types of entity in Drupal (hopefully all, soon) support revisioning. That is, when saving the entity, instead of overwriting the existing one, a new revision is created, which sometimes, but not always, becomes the new "default" version. Previous versions are available at their own URIs. That can change at any time, however, subject to user configuration. More recently, we've also been allowing forward revisions: creating a new version that is not yet the default version, but will be.

How does that play into idempotence and PUT? If a new revision is created, then repeating the PUT is not a no-op. Rather, it would create yet another revision. The spec says:

A single resource MAY be identified by many different URIs. For example, an article might have a URI for identifying "the current version" which is separate from the URI identifying each particular version. In this case, a PUT request on a general URI might result in several other URIs being defined by the origin server. (RFC 2616 section 9.6)

That seems to imply that a PUT to create a new revision is OK. However, what of forward revisions? If you create a new revision, but don't set it live, it means that a PUT followed by a GET on the same URI will not return the value that was PUT. It would return the previously existing value.

Put another way:

PUT /node/5

{title: "Hello world"}

Results in:

GET /node/5

{title: "Hello world"}

and

GET /node/5/revision/8

{title: "Hello world"}

And that's totally fine, by my read of the spec. However, what of:

PUT /node/5

{title: "Bonjour le monde"}

Results in:

GET /node/5

{title: "Hello world"}

GET /node/5/revision/8

{title: "Hello world"}

GET /node/5/revision/9

{title: "Bonjour le monde"}

Is that still spec-valid behavior? And if not, does that mean that any system that uses a Create-Read-Archive-Purge (CRAP) model instead of CRUD, or that supports forward revisioning, is inherently not-PUT-compatible? (That would be very sad, if so.)
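The forward-revision wrinkle can be modeled in a few lines. In this toy sketch (class and method names are mine for illustration, not Drupal's API), each PUT appends a revision, and the "default" pointer only moves when the revision is published; repeating the unpublished PUT would append yet another revision, so the request is not idempotent in the net-zero sense even though the default resource state never changes:

```python
class RevisionedNode:
    def __init__(self):
        self.revisions = []   # revision id = list index
        self.default = None   # id of the live (default) revision

    def put(self, body, publish=True):
        """Save a new revision; optionally make it the default."""
        self.revisions.append(body)
        rid = len(self.revisions) - 1
        if publish:
            self.default = rid
        return rid

    def get(self, revision=None):
        """GET the default revision, or a specific one."""
        rid = self.default if revision is None else revision
        return self.revisions[rid]

node = RevisionedNode()
node.put({'title': 'Hello world'})                      # revision 0, live
node.put({'title': 'Bonjour le monde'}, publish=False)  # forward revision 1

assert node.get() == {'title': 'Hello world'}           # PUT and GET disagree
assert node.get(revision=1) == {'title': 'Bonjour le monde'}
```

This is exactly the behavior the examples above describe: a PUT followed by a GET on the same URI does not return the representation that was PUT.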

Hooks

The other concern is Drupal's extensibility. When an entity is saved, various hooks/events fire that allow other modules to respond to the fact that the node has been saved. Those hooks can do, well, anything. While in the vast majority of cases they will do the exact same thing every time a new update is made or a revision is saved, that's not a guarantee. They may take a different action depending on the values that were just saved for the entity. Or they may take a different action on different days. Or they may generate IO, such as sending an email or saving additional database information, or triggering a cache clear, or launching a nuclear warhead. (Unlikely, but the API allows for it!)

Since those hooks MAY do things that are not idempotent, does that mean we MAY NOT use PUT, since it must be idempotent? Or does it mean we simply document that hooks SHOULD NOT do non-idempotent things and call it a day?

In any event, that's our situation. We want to properly leverage the HTTP spec and REST principles here, but I fear that Drupal's very extensibility makes that semantically impossible. I am hoping I'm wrong, but in any event I turn the question out to the peanut gallery for consideration.

After a little over 9 months, and with an impressive 1290 sites reporting that they use it, Menu Views has undergone a little nip and tuck! Today I have finally released the 2.x branch, in hopes of squashing the ever-so-annoying bugs and fulfilling the wonderful feature requests that were made! This module has been an invaluable tool in the mega-menu creation process. It has solved a problem for many people: how to insert a view into the Drupal menu system.

Many Drupal sites I've seen throughout the years (those with complex mega-menus) left me perplexed as to how they accomplished the task. I could never really imagine it happening effectively unless it was rendered using Views. Once I finally saw how some of these sites actually pulled off this great feat, I was a little baffled at the sheer complexity of making it happen.

Oftentimes the theme is the unsuspecting and unfortunate victim: hacked, sliced, and injected with arbitrary code to produce the desired result. Some prefer to do it this way, and to them I say "be my guest". However, when a more dynamic approach is needed, it is far better to utilize the awesome power of Drupal. Which prompts me to reiterate what a CMS is for: letting the system actually manage the content (hmm, novel idea).

Eureka! Let Drupal's own menu system and administrative user interface handle this complex problem! When I first released Menu Views (1.x), that is what it solved: inserting a view into the menu. However, it also introduced quite a few other problems that were unforeseen and rather complicated to fix. Namely, these involved other contributed modules and the biggest native one: breadcrumbs!

Over the past few months, I really started digging into the complexity that is the menu system in Drupal, trying to figure out what exactly I could do to simplify how replacement links were intercepted and rendered. After poring over the core menu module and several contributed modules, I noticed that the best approaches had one thing in common: theme_menu_link().

In Menu Views 1.x I was intercepting theme_link() instead. In hindsight, I can't believe how incredibly misguided that was! Essentially, the old method intercepted every link on the site on the off chance it might be a menu link with Menu Views configuration in its options array. Whoa... major performance hit and a big no-no! For this reason alone, I decided to do a full version bump on Menu Views. Part of this decision was to consider existing sites and how they may have already compensated for views popping up everywhere. An additional deciding factor involved refactoring the entire administrative user interface experience.

In Menu Views (1.x), there seemed to be a lot of confusion around "attaching" a view and optionally hiding the link using <view> in the link path. I thought about this for a while and ultimately decided that the best way to solve it would be to separate the view form from the link form and give the administrator the option to choose what type of menu item this should be. This way, Menu Views can more accurately determine when to intercept the menu item and render either the link or a view.

There are now a couple of options to better manage what the breadcrumb link actually outputs, along with the ability to render a title outside of the view if desired; both can use tokens! No longer are we left stuffing extraneous markup into the view's header. Last but not least, one feat of UI marvel: node add/edit forms can now control menu views, so you're no longer limited to just the menu overview form!

Overall, I think Menu Views' new facelift will give this module the level of maturity and stability it has greatly needed. Thank you all for your wonderful suggestions in making this module a reality and truly a joy to code!

We are less than 24 hours away from our fourth annual Dallas Drupal Days conference, with a Drupal Business Summit on Friday and a DrupalCamp on Saturday. If you need more motivation to be here...

Hear Josh Koenig, fresh from interviewing Dries at the DrupalCon Munich keynote, talk about "The Drupal Destiny". With a name like that, it's got to be good.

Learn how McKesson, 15th on the Fortune 500, built their Patient Portal using nothing but Drupal and tongue depressors.

Find out 10 ways your Drupal site can get hacked. It just might be getting hacked RIGHT NOW!

Visit the Results Oriented Social Media Summit going on at the same time at the same venue and totally included in your ticket. If you are into that squishy social stuff... (ed. Heeey! I like the squishy social stuff! I'll be there!)

Koumbit has been building services based on the Aegir Hosting System, in one way or another, since the project's inception. So it's with great pride (and a little relief) that we happily announce the public availability of our latest offering. Our AegirVPS services provide managed, dedicated virtual servers with Aegir fully installed, monitored, maintained, and supported.

Aegir is the only fully free and open source distributed provisioning system for Drupal. It allows you to manage anywhere from a few sites for a single organization, to thousands of sites across as many concurrent instances of Drupal, on as many servers, and for as many clients as you need. Since it's all built on Drupal and Drush, it can be customized and extended using all the tools our community is already familiar with. We at Koumbit are fully committed to keeping software free (as in freedom), so not only have we been building a great service, we've been making sure to build it on an entirely open source software stack.

Koumbit engaged in the Aegir project from its very beginning, over 4 years ago, as active users, developers, and maintainers. Since then, we've come to run all of our Drupal development and hosting on Aegir, maintaining and contributing regularly both to the core project and to a number of extensions.

Eventually, some larger clients warranted having dedicated Aegir servers setup for their own teams of developers, with custom platforms, high performance caching and high availability clusters. This led us to develop Debian packages and Puppet modules to make their maintenance and support more consistent, reliable, secure and all around easier.

From there, the natural evolution was to continue this trend of automation, giving clients control of Puppet-based configuration management, adding a helpdesk, dedicated client support tools, documentation, and automated platform maintenance. Almost one year later, Koumbit's AegirVPS services are now fully online and prepared for broader release.

To do real justice to the features and other aspects of our AegirVPS services, we've begun publishing a new series of articles. These are intended to be part tutorial, part requests for comment, and all shameless plugs for our new AegirVPSs. The first of these covers our innovative configuration deployment mechanism.

Most importantly though, this service begins and ends with the people involved. Our team comprises seasoned and talented sysadmins, designers, developers, themers, support and accounting staff. Years of experience on the part of this whole team, as well as our intrepid early-adopter clients, have helped shape the Aegir Project as a whole, from reporting and fixing bugs, to suggesting, building, testing and refining features. I'm very proud to be a part of it.

Many of you might be familiar with the module Skinr (http://drupal.org/project/skinr). It gained a lot of support back in Drupal 6 by providing an easier, albeit somewhat verbose, way of dynamically configuring how elements on your site are styled.

When I first started using Skinr, it worked as advertised; however, it ultimately left a bitter taste in my mouth. I felt like I was constantly navigating an endless maze while blindfolded. There were options upon options and fieldsets within fieldsets; it had almost everything but the kitchen sink.

I have never really been one who enjoys feeling like I'm wasting my time, so I eventually scrapped this module as a long-term candidate for managing customizable styling. Apparently, I wasn't the only one with these concerns.

Then the 2.x branch was born. I only started using the 2.x branch because it was the only one available for Drupal 7. The maintainers had completely scrapped the old interface and done an amazing amount of work to make Skinr easily maintainable. You can view a list of changes here: http://groups.drupal.org/node/53798.

So if any of you are like me, you probably were thinking: “Skinr, in Drupal 7? I don’t want to make this any more confusing than it has to be!” Well fear not! You can learn how to start Skinr-ing in just 7 easy steps!

The biggest revolution in the history of marketing is in full swing. If you are in business and you are on the web, you are in the publishing business. You need to publish great content that attracts audiences and gets them excited about your brand – and, oh yeah, gets them telling their friends.

However, organizations rarely combine strategies for writing extraordinary content with the full power of their publishing tools. The key is to bring the two together.

Content strategies tell you how to position, write and optimize your copy – or other media. Content management systems such as Drupal are your online printing press.

In this video we cover the essentials of bringing content strategy and content management together in one streamlined framework. In the next video we will cover content promotion through search engine optimization and social media.

Here's your chance to win a free copy of the Drupal 7 Mobile Web Development Beginner's Guide, just by commenting!

How you can win:

To win your copy of this book, all you need to do is post a comment below highlighting why you would like to win it.

Duration of the contest & selection of winners

The contest is valid for 7 days and is open to everyone. Winners will be selected on the basis of their posted comments (or at random, whichever is suitable).

About the book

The Drupal 7 Mobile Web Development Beginner's Guide shows readers how to implement audio, video, charting, and mapping solutions that work in mobile, tablet, and desktop browsers. Written by Tom Stovall, this one-of-a-kind book will enable readers to set up Domain Access and Drupal Behaviors that redirect mobile and desktop browsers to the version of the website most appropriate for their client.

To make the transition to a mobile site easier, this guide includes a 'Mom & Pop' restaurant site, along with fun examples based on a family pizza restaurant, that help readers adapt their websites to be fully functional in a mobile environment. Besides sharing content across sites without resorting to a multi-site install, readers can customize themes to present their site with a unified marketing message.

We have some really fantastic clients here at LevelTen and have worked very hard with them to explain how web development works. We find, though, that many people just have no idea what the process involves. Over the last couple of years of working with a fantastic team here at LevelTen, it has occurred to me just how similar building a website is to building a house. It turns out this analogy helps a lot in explaining what we do and how we do it. It also helps set expectations along the way.

If you were to start building a house today, would you start by hiring an interior decorator? Of course not! You also wouldn't ask them to design the structure of your house, and yet this is often what happens when building websites. It is important to gather all the right people to build a website, just as when building a house.

General Contractor/Project Manager

First there is the general contractor. For a house, he is the main contact that you, the customer, have with all of the other people building the house. You may talk to the others, but at the end of the day he is the one responsible for the whole project, making sure that everyone has what they need and works together. At LevelTen we call this person the Project Manager.

Architect/Information Architect

Next up, you would probably go talk to an architect to design the house. What's interesting at this point is that no one is really talking all that much about the color of the paint on the walls or which pictures go where. Generally you are just trying to get an outline of the house: which rooms go where and how big they are. Then there is the engineering of making sure everything is livable and works right. At LevelTen, this is our Information Architects/Wireframers. Their main job is to talk to the client, get a good idea of the breadth of the project, and then design the whole thing so it will work.

It is really important when talking to an Information Architect to make sure that you don't leave anything out. Could you imagine getting near the end of building your new house and then remembering that you wanted a media room right in the middle? How much would it cost to squeeze another room in there? That happens all the time in the web world. If we don't know what is going on up front, then we can't plan for it, and it is going to take a lot of effort (read: time and money!) later on.

Builders/Developers

Next come the actual builders. They lay the foundation, raise the walls, and make sure the plumbing and wiring are done. In typical house construction, getting the foundation laid and the walls framed usually takes about 40% of the budget. This again is very similar to web development. At LevelTen we have a very talented group of developers who use Drupal to build some incredibly feature-rich websites very quickly. These are the builders of your website. They build the content types and views (roughly like rooms) and make sure that all the modules are set up (roughly like plumbing and electrical).

It's at this point that a web shop that really understands Drupal is going to stand out. It takes a while to really learn how to build a Drupal website the right way. We've seen plenty done the wrong way. So finding someone who can do it right the first time is important. Would you hire someone who only has experience building brick buildings to build your wooden framed house? I sure hope not.

Interior Decorator/Graphic Designer

Once the building of your house is done, the interior decorator takes over. This is the person who picks out the colors, bricks, furniture, and finishing touches. So much of this process depends on the preferences of the person who will be living in the house that it is a bit of an art to get right. Again, we've got a couple of very talented designers here at LevelTen who can make amazing designs for websites. They pick out the colors and pictures (kind of like furniture!) and make sure everything looks good together.

Painters and Movers/Themers

Of course, once the whole design is in place you are going to need the painters and movers to actually get the house looking like the design. This is analogous to our themers. It actually takes quite a bit of work to get all the images, colors, and everything else set up so that a very dynamic site always looks good. This is one area where I'd say the web world is actually harder than the physical world, mostly because everything has to look good on hundreds of pages instead of just one room. Luckily we've got a great themer with an eye for detail and a complete understanding of all the different browser quirks.

Occupants/Content Authors

At this point the house and website are more or less done, but there is still one thing missing: the occupants! People live in houses, move around in them, and are constantly doing things. With websites, that is the role of the Content Authors. To have a really great website you need to constantly create and update your content. This is very much like the people who live and breathe in a house; you are going to need some people to live and breathe in your website as well. At LevelTen we typically don't do a lot of the content authoring, since it is so specific to the company that it turns out much better when done by someone internal. We do, however, do a lot of training on the newly built website and on how best to write for it.

Security and Maintenance

There is one other group of people that is important for houses and websites but is often overlooked: the people who secure and maintain them. Think of the security monitoring firm and the handyman. Websites need the same monitoring and maintenance. You wouldn't build a house, furnish it, and then not look at it again for three years, would you? Websites are the same way. You need to keep them secure and maintained. LevelTen also offers a support contract that handles this, allowing our expert team to keep your website secure, up to date, and working.

So the next time you think about building a website, don't just look for some pretty designs; find a team that has all the skills necessary to take your project through the full process and create a successful website. I can honestly say that our team here at LevelTen is the best I have ever seen for building websites, and it is an honor and a privilege to work with them.

I just picked up my copy of Drupal 7 Mobile Web Development and I must say that from start to finish it's packed full of useful, practical, and relevant information. From the first chapter, "When is a Phone not a Phone?", straight through to the final chapter, "A Home in the Clouds", Stovall writes in a very readable, engaging style with logical headings, pop quizzes, and actions.

Chapter 1: "When is a Phone not a Phone?" This chapter documents the evolution of phones from "dumb" phones into "smart" and even "smart-er" phones, and the role that HTML standards and CMSes like Drupal have played and can play in optimizing users' experiences on these devices. He covers the development of WAP as well as WebKit, and the similarities and differences between the various mobile OSs. He briefly covers the advent of tablets and explains that "mobile" is driven by the context of the user, not the machine. He then goes into how to set up development environments for both iOS and Android and mentions some mobile simulators.

Chapter 2: "Setting up a Local Development Environment" Here Stovall goes over the basics of setting up a development environment for Drupal that follows best practices, using a sample site to guide the reader. Topics include Drush and Drush Make, source control (Git, SVN), the AMP/WAMP/MAMP stack, and others. This chapter alone is worth buying the book for, as it's essential to get this part right. Note: The author, Tom Stovall, asked me to post a link to this blog post on his blog, as part of the install process for Drush changed post-publication of this book. Here is the post.

Chapter 3: "Selecting the Right Domain for your Mobile Site" This chapter covers design issues and how to manage domains for serving the same content to different host domains with different Drupal designs (themes). It also covers backing up, migrating, and deploying mobile designs, as well as how to use a User Acceptance Testing (UAT) environment.

Chapter 4: "Introduction to a Theme" This chapter covers theming in Drupal 7 (versus Drupal 6) for mobile environments, including the changes HTML5 brings to this process and "progressively enhancing" a mobile site. It also covers, albeit somewhat surprisingly, semantic concepts and RDF and how these standards interact with HTML5, plus a section on redirecting users to the mobile version of your site with JavaScript, among other topics.
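To give a flavor of the JavaScript redirect technique Stovall mentions, here is a minimal sketch of my own; it is not the book's code, and the user-agent patterns and the mobile URL are assumptions for illustration.

```javascript
// Coarse user-agent check, kept as a pure function so it can be
// exercised outside the browser. The patterns below are illustrative
// assumptions, not an exhaustive (or the book's) list.
function isMobileUA(ua) {
  return /Mobi|Android|iPhone|iPod|BlackBerry/i.test(ua);
}

// Browser usage (sketch): send mobile visitors to the mobile version
// of the site. "m.example.com" is a placeholder domain.
// if (isMobileUA(navigator.userAgent)) {
//   window.location.replace('http://m.example.com' + window.location.pathname);
// }
```

In practice you would also want an escape hatch (a "view full site" link that sets a cookie) so mobile users are not trapped on the mobile theme.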

Chapter 5: "A Home with a View" This chapter discusses strategies for homepage design in a mobile environment, including modules that assist with this process.

Chapter 6: "The Elephant in the Room: Audio, Video and Flash Media" This chapter covers multimedia files and streaming in a mobile environment, including the Media module, strategies for handling media files in different mobile environments and on different devices, and how HTML5 and CSS3 will change the playing field.

Chapter 7: "Location, Location, Location" This chapter is about using the Location and GMap modules and other tools to create a rich mobile experience for users, with geolocation information and interactivity.

Chapter 8: "Services with a Smile" This chapter covers web services, focusing mainly on REST with JSON data via the Services module, which also supports SOAP and other protocols and formats.
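To make the REST/JSON idea concrete, here is a small, hypothetical example of consuming a Services-style JSON response on the client; the endpoint path and field names are my assumptions, not taken from the book.

```javascript
// Hypothetical shape of a node-index JSON response from a Drupal
// Services REST endpoint (the "nid"/"title" field names and the
// endpoint path in the comment below are illustrative assumptions).
function nodeTitles(json) {
  return JSON.parse(json).map(function (node) {
    return node.title;
  });
}

// Browser usage (sketch):
// jQuery.getJSON('/api/rest/node.json', function (nodes) { /* render */ });

var sample = '[{"nid":"1","title":"Welcome"},{"nid":"2","title":"About"}]';
console.log(nodeTitles(sample)); // → ["Welcome", "About"]
```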

Chapter 9: "Putting it Together" This chapter brings it all together to publish a live mobile site using build modes and the Display Suite module, as well as jQuery Mobile and its AJAX features, dealing with fonts, and launching the site.

Chapter 10: "Tabula Rasa: Nurturing your Site for Tablets" This chapter covers the history of tablets and how "touch events" differ from mouse events. Stovall covers adaptive web page designs and using CSS for multiple use cases.
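The touch-versus-mouse distinction usually starts with feature detection rather than user-agent sniffing. A minimal sketch of my own (not the book's code):

```javascript
// Feature-detect touch support. The window object is passed in as a
// parameter so the check can be exercised with a stub object outside
// the browser (an illustrative design choice, not the book's code).
function supportsTouch(win) {
  return 'ontouchstart' in win ||
    Boolean(win.navigator && win.navigator.maxTouchPoints > 0);
}

// Browser usage (sketch): bind touch handlers only where they exist.
// if (supportsTouch(window)) {
//   element.addEventListener('touchstart', onTouchStart);
// } else {
//   element.addEventListener('mousedown', onMouseDown);
// }
```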

Chapter 11: "A Home in the Clouds" This final chapter covers setting up a virtual hosting account and related "cloud-based" issues, including RightSpace virtual host management, cloning servers, and code deployment with Jenkins and GitHub.

We only have experience with one Drupal 7 site so far and have struggled a bit to optimize it for mobile devices. After reading Stovall's book, I feel better prepared to tackle the mobile development tasks ahead. I highly recommend this book to anyone working with Drupal websites.