Estimating a software project can seem as useful for predicting the future as gazing into a crystal ball—a crystal ball fogged over with unclear requirements, buzzwords, and hand waving. Giving your client a single number for their project is difficult at best.

This is especially true with fixed-bid contracts. Development shops are often at a disadvantage, guessing at solutions early in the product development cycle, and assuming a disproportionate share of the risk should things go awry.

Now that we’re done whining…

On the other hand, clients who are paying money for a website or other development work want to know how much their website will cost, and how long it will take. This is a reasonable expectation, but certainty is often hard to come by until very late in the development process.

Handing your client a napkin with a number written on it is the worst idea ever. A single-number estimate amounts to a promise, and puts the uncertainty and its corresponding risks entirely on your shoulders.

Instead of a single value estimate, it’s more truthful to supply a range.

Estimate ranges are the new hotness

It would be great to share your estimate with a clear probability, something like, "We’re 80% sure this will take 5 weeks." However, it’s hard to do an estimation that arrives at a strict probability without historical data and clear requirements.

An acceptable alternative to computed probability is to present your estimates as a range of time to completion: "this will take between 4 and 7 weeks."

This kind of estimate implicitly contains a probability: you’re saying there’s a 90-100% likelihood of a successful outcome with a 7-week timeline. You’re also telling the client that there’s a chance it could be completed sooner, but that would be a less likely outcome. In any case, you’ve set a minimum of 4 weeks for completion if everything goes perfectly.

How to talk about estimate ranges

Of course, when you show a client an estimate with large variances between the best and worst-case scenarios, it can be uncomfortable. It exposes the truth behind all software estimation: we’re guessing!

Even experienced estimators get it wrong, and the first time you put a range in front of a client, you might feel that you’re exposing yourself as an imposter. On the contrary, sometimes it’s the most savvy thing you can do.

When you show your client an estimate with wide variances, you can have a conversation about those items that are most uncertain. That can lead to clarified requirements, cuts in scope, or to a shared understanding that another round of planning is needed before an estimate can be made.

Back in the day…

In 2011, Seth Brown wrote about Lullabot’s estimation techniques. In The Art of Estimation, he shared how we break down a project into bite-sized tasks and use the Wideband Delphi method to facilitate discussion about the size and potential uncertainty for each task.

In 2012, Jerad Bitner posted An Update on the Art of Estimation, which described some improvements and adjustments to the process Seth wrote about, and shared a Google Sheet to let our readers try out the Wideband Delphi method for themselves.

Those articles are still very relevant. If you haven’t read them, I highly recommend you do so before proceeding. But we’ve got a new tool that I’d like to share, which takes what we’ve learned since then and brings it up to date.

As before, we’re using Wideband Delphi. You’ll need two (or more) people to estimate and compare the results. As a project manager, you might be one of them, or you may have two developers working with you while you facilitate the conversation.

Estimation requires dealing with uncertainties

Previous discussions about the Wideband Delphi method focused on comparing multiple estimates and arriving at an agreement about possible solutions. Since Seth and Jerad wrote their articles, I’ve done some reading that has extended our thinking and helped us refine that process.

Anyone who’s estimated more than a few projects has heard the old cliche ‘Make your best guess, and then double it’. That speaks to the notion that as developers and business people, we routinely underestimate the complexity of the work we take on. We smooth over uncertainties and make our best guess—but being optimistic in this case is a bad move. It’s better to recognize and even highlight areas where requirements aren’t clear.

How does uncertainty affect an estimate?

Software Estimation by Steve McConnell has a ton of great advice on all facets of this topic. It’s probably the best distillation of software estimation techniques I’ve run across, compiling research from academia and presenting it in a format that’s more accessible to working developers and project managers. In particular, he discusses how uncertainty can affect the accuracy of an estimate.

He describes how the COCOMO II software costing model identified that highly uncertain estimates can vary by a factor of 4. That would imply it’s not enough to simply double your guess. Other aspects of COCOMO II aren’t a great fit for agile projects and modern website development, but that finding was especially useful to me as an estimator.

McConnell provides a suggested range of variance for estimation activities, based on the level of uncertainty in a given requirement. I borrowed those general rules about estimation ranges based on uncertainty and worked them into our spreadsheets here at Lullabot.

We now have a low-high range calculation for each estimate a developer makes. We can use that to drive internal discussions, as well as help our clients clarify their requirements before the project is fully underway.
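As a rough illustration of how such a range can be computed (the smaller multipliers below are invented for the example; McConnell’s book provides researched values, and the factor of 4 comes from the COCOMO II finding above):

low  = estimated hours                        (everything goes perfectly)
high = estimated hours × uncertainty factor   (for example, 1.25 for low, 2 for medium, 4 for high uncertainty)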

Our New Estimation Tool

The tool is a Google Sheet made up of several linked worksheets:

a master sheet where you enter the work breakdown.

sheets for two developers to do their estimations, which borrow values from the master sheet.

a comparison sheet that takes those estimates and shows them side by side.

subtotals to do post-estimate work like adding modifiers for project management or activities supporting development.

All of this helps reduce copy/paste work for the project manager, and makes it easy to see where developer estimates have varied, which helps focus the discussion.

Using the tool

Here’s how I recommend using the spreadsheet:

Clone the sheet

You’ll want a copy of the sheet you can actually work on—feel free to save a copy of ours and make your own modifications. There are some instructions on the first sheet to help you get started.

Plan

You might be working with a list of desired features from a client, or a wireframe that you’re decomposing into components.

Whatever it is you’re estimating, you’ll want to break it into manageable chunks, and enter them on the ‘Master Estimate’ worksheet, along with a unique ID for each item. Those values will magically appear on the developer estimate sheets.

[image:{"fid":"2959","width":"full","border":false}]

Modify

Adjust the headings for columns E-I to match your process. I’ve included research, design, dev, theming and QA, but you may want to break your estimate up differently.

[image:{"fid":"2958","width":"full","border":false}]

You can rename the developer sheets if you wish, to the names of the folks who will be estimating the work. Just don’t edit the grey columns, because they contain the formulas that make all this work.

You can also add more developers to the estimation process if you feel it’s needed—that would require some copy-and-paste, and a little twiddling of formulas on the comparison sheet. I’ll leave that up to your ingenuity!

Estimate!

Here’s what to do for each row:

Review the requirements

Add a proposed solution

Indicate the level of uncertainty from the dropdown provided

Supply estimates (in hours) for any applicable categories

[image:{"fid":"2950","width":"full","border":false}]

Compare

Using the comparison worksheet, you can identify line items where solutions and estimates differ. Filter on the standard deviation column if needed to spot estimates with a lot of variance, and then discuss them with your team to arrive at an approach everyone is comfortable with.

[image:{"fid":"2951","width":"full","border":false}]

Deal with that uncertainty!

With that estimate in hand, you can decide whether to have an open meeting with the client and share the full sheet, warts and all, or if you want to clean things up and present a more limited version, with talking points for the main uncertainties.

How your client works with you during that conversation might tell you a lot about whether they’re a good match for your team, and if they’re able to work flexibly with the parts of a project that aren’t clear at the outset.

However you do it, Wideband Delphi lets you check your assumptions internally. Then you can take the resulting range and present it, or work with your client to narrow it first before offering a formal estimate. Either approach beats handing the client a single number and hoping for the best.

Drush is great! I can’t manage Drupal without it. But now that Drupal 8 is nearing release I’ve run into a big problem. Drupal 8 requires the bleeding edge version of Drush, but that version of Drush won’t work with older Drupal sites. In particular, I was running into problems trying to switch between a Drupal 6 site and the Drupal 8 site I’m trying to migrate it into. Drupal 6 works with nothing later than Drush version 5, but Drupal 8 requires a minimum of Drush version 8! And in the meantime I’m still working on several Drupal 7 sites which have Drush scripts that only work with Drush version 6 or 7. What I needed was an easy way to switch versions of Drush for the task at hand.

I combed the web for instructions on how to switch Drush versions on a Mac and didn’t find what I needed. But I did find several articles that had parts of the answer. So I stitched things together and came up with the following system based on Composer.

1) Install Composer

Composer is the recommended method of installing Drush these days, certainly for the bleeding edge version. I’ll need composer to work with Drupal 8, so this makes sense anyway. It’s pretty easy to install following the instructions at https://getcomposer.org/doc/00-intro.md#globally.

I previously had Drush installed with homebrew and wanted to get rid of that installation, so I had to do this:

brew remove --force drush

Then I installed a default version of Drush, Drush version 7, globally:

composer global require drush/drush:7.*

2) Pick a Location

I decided to go whole hog and create a way to switch between every version I might need, Drush 5, 6, 7, or 8, by creating directories for each of these. I could do this anywhere, but the most logical place seemed to be in my user directory.

Since I installed Drush version 7 globally, anytime I type “drush” without a version modifier, it will default to using Drush 7. Because of that I could have skipped the installation of the Drush7 version above, but I decided I liked the idea of having both a global default (that I might change later) and a definite way to invoke version 7 that will work without knowing or caring what the global default is.
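Here is a sketch of the per-version setup and the aliases, assuming each version is available through Composer and that you use bash (the paths and directory names are illustrative):

mkdir ~/drush5 ~/drush6 ~/drush7 ~/drush8
(cd ~/drush5 && composer require drush/drush:5.*)
(cd ~/drush6 && composer require drush/drush:6.*)
(cd ~/drush7 && composer require drush/drush:7.*)
(cd ~/drush8 && composer require drush/drush:dev-master)

# then point an alias at each copy in ~/.bash_profile
alias drush5='~/drush5/vendor/bin/drush'
alias drush6='~/drush6/vendor/bin/drush'
alias drush7='~/drush7/vendor/bin/drush'
alias drush8='~/drush8/vendor/bin/drush'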

8) Test The Aliases

To test the finished system, I made sure the aliases work as designed. I opened a new terminal window (so it picks up the changes in the bash profile) and typed:

drush5 --version
drush6 --version
drush7 --version
drush8 --version

From this point on any time I need to run a drush script that uses a particular version of drush I just need to use my new aliases to do so:

drush5 status
drush6 cc all
drush7 sql-sync

9) Profit

That’s it. Now I know I can control the drush version by adjusting my commands to invoke the right version.

There are many services out there that want to talk to your application. An event happens, such as a new subscriber, and the service wants to tell you about it. Often, this functionality is implemented via a webhook pattern, where your application exposes a public URL for the express purpose of receiving such communication.

Let me offer an example. I recently needed Mailchimp to send me a notification whenever a campaign was sent to one of our mailing lists. But how to test this? Mailchimp needed a URL it could access publicly, but I didn’t want to test anything on a live, public server. That would be time consuming, and probably a little dangerous. I needed to expose a public URL from my local machine.

There are a few tools that offer this functionality, but the best was ngrok. After downloading the tool, you can run it via the command line. The following commands assume a Linux/Mac computer. If you are on Windows, just ignore the “./” in front of the command.

./ngrok http 80

This brings up a UI with some information.

[image:{"fid":"2932","width":"full","border":false}]

You get a temporary hash subdomain (03fcdb2a) that is now forwarded to your localhost. And you can see connections as they happen. Pretty cool. This is the domain I needed to put into the Mailchimp settings, so it knows where to send updates.

If your app is actually running at localhost on port 80, this is all you need to do. Chances are, however, that you have more than one development environment. How do you point to the right one?

Virtual Hosts and Virtual Machines

If you are running Apache locally as your webserver, you might have several virtual hosts set up to respond either to different port numbers or different host headers. If you have distinguished them with ports, just put a different port number in the command.

./ngrok http 5000

If your virtual hosts will only respond to a certain host header, there’s an option for that:

./ngrok http -host-header=example.local 80

If you use vagrant and virtual machines, ngrok can work for you too. Just ensure that traffic is forwarded to the local hostname of your vagrant instance:

./ngrok http -host-header=example.local example.local:80

Now we have a public URL that is forwarded to a place on our local machine. An outside service can send us data. But now we need to test how everything is working together, and ngrok provides some tools that make this easier.

Faster Development with Replay

One of the most useful features of ngrok is the web interface, which runs locally and shows one entry for each connection you forward.

[image:{"fid":"2933","width":"full","border":false}]

Here you can view both the request you receive and the response you send. The default is an easy summary of the post data, but you can view it raw, or even in binary if you want. If you’re having issues, this dashboard is one of the first places you should look, since it can even help you pinpoint problems coming from the source service.

The most useful feature of this dashboard is the replay functionality. It’d be tedious to send a test campaign every time I wanted to test the webhook from Mailchimp. Instead, I click the replay button, and ngrok will resend the same request.

Similar Tools

There are other tools, like Ultrahook and localtunnel, that offer similar functionality, but I found ngrok the best to work with for a few reasons:

It has no dependencies. No need to mess with Ruby gems or npm.

Lots of flexibility that is well documented. Add your own subdomain or custom domain, HTTP authentication, and more; see the examples after this list. Modifying the host header, as I did above, wasn’t possible with some of the other tools.

The user-friendly dashboard with replay functionality is specifically made for developing for things like webhooks.
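For instance, assuming the flag syntax of the ngrok 2.x releases current at the time (check ngrok’s own docs for your version), a reserved subdomain and HTTP authentication look like this:

./ngrok http -subdomain=myapp 80
./ngrok http -auth="user:password" 80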

Bonus: Responsive Design Testing and Other Collaboration

Developing responsive websites becomes a little easier with the help of ngrok. Do you have several devices you want to test your web application on, but don’t want to push anything up to a publicly accessible server? Serve it from your local machine.

If Mailchimp and other services can reach your localhost, so can your own collection of various-sized devices. And so can your co-workers, no matter where they are located.

Out of the box, Drupal offers the rudimentary ability to send automatic email notifications, such as Account Activation or Password Recovery emails. These are examples of what are called transactional emails. A transactional email is a message that is sent to an individual (not part of a mass-mailing), in response to a certain event. This event can be an action taken by some user, and sometimes even a lack of action. Besides the two examples already given, some more examples of transactional emails are:

An order summary sent to a customer.

“Jason has commented on your article.”

A listing of upcoming events for groups a user has subscribed to.

Depending on your site, this type of email can range from something that’s merely “nice to have” all the way to a feature that’s critical for your business.

For emails that are critical (say, a shipping confirmation for an ecommerce site), ensuring delivery of these emails is important. Drupal, by default, will use PHP’s mail function. This is fine for simple applications, but falls short in at least two ways:

Reputation. Mail you send could be flagged as spam and never reach your users, even though there are no error messages. Some email providers automatically block all email initiated by PHP mail, and sometimes your server’s IP address could be on one or more blacklists. The latter is a problem that can require constant monitoring to ensure it doesn’t affect you.

Analytics. You get almost no information on who opened your mail, delivery success rate, or clicks on included links. You’re sending mail blind.

Mandrill, a service by Mailchimp, aims to help solve these issues, and with the Mandrill Drupal module, it becomes easy to integrate. After you sign up for a free Mandrill account, create your first API key.

Go to Configuration->Mandrill and enter your new API key.

[image:{"fid":"2934","width":"full","border":false}]

After entering a valid API key, you’ll be presented with a lot more options, including a “Send Test Mail” tab at the top. You’ll want to send a test email to ensure everything is working properly.

At this point, Drupal is still using PHP mail to send all of its email, so be sure to go to Configuration->Mail System and change Site-wide Default to use the MandrillMailSystem. It’s possible to be more granular with what types of emails you actually send through Mandrill, but for now, this setting will get you started, and is fine for most use cases.

[image:{"fid":"2935","width":"full","border":false}]

Drupal will now use Mandrill to send all of its mail, and you get access to the rich set of reports so you have a better idea of what is going on. That’s all you need to do to ensure more reliable delivery of your site’s mail, with performance analytics.

As a nice bonus, each kind of email that Drupal sends has a unique identifier, and the module automatically tags emails with this identifier before forwarding them to Mandrill for final processing. These tags can be used to filter reports or to perform A/B split tests on your email content to help improve user engagement. Below, you can see how easy it is to target password recovery emails for split testing.

[image:{"fid":"2936","width":"full","border":false}]

The Mandrill module comes with a sub-module called Mandrill Reporting that gives you some basic reporting from within Drupal itself, but you’ll probably want to stick with the native Mandrill interface so you are sure to get all the details.

Another included submodule is Mandrill Template, which allows different types of emails to each be wrapped in different templates. This is a more advanced use case and requires additional code and knowledge of Mailchimp’s Merge Tags to take full advantage of, but the possibilities are there.

There are similar services that solve the same problems, like Postmark and Sendgrid, and each has their respective Drupal module as well. However, if you are looking for a service that includes a very attractive free tier (12,000 free emails per month!) along with a mature, active, and heavily backed module, Mandrill could be the right choice for you.

Update 7/23/2015: Shortly after publication, Mandrill changed their free tier to just 2000 emails. This is considerably lower than both Postmark, who provides 25,000 free emails, and Sendgrid, who recently added a 12,000 per month free tier. If you like the simplicity of use, feature set, and want to stay in the Mailchimp family, Mandrill is still a good option. Postmark has its own stable Drupal module, but it will cost more over the long run, and if you want to do A/B split testing, you'll probably need the help of another tool.

At Lullabot, our secret ingredient for designing truly meaningful solutions is thoughtful, yet lean, research. Whether surveying users or gathering screenshots for market research, we work toward delivering the most value for our clients in the shortest amount of time. To save time without sacrificing value, you need the right tools.

Github Wikis

We recently switched from Basecamp to using Github + Huboard for publishing documentation. The simple text editor for GitHub wikis helps keep our research findings in a clean and focused format. It’s also easy to create a central wiki menu linking to all documentation. We have also found that aligning our tools and processes with our development team is a huge positive; using fewer tools across our entire team can help us stay organized and speak the same project management language.

Typeform

We love using Typeform for surveying, both internally and externally. From creating a survey to taking a survey, the entire interface is beautiful and easy to use. Check out these example surveys. Not only are Typeform surveys mobile-ready, with beautiful, flexible styles, the survey experience is unlike any other survey tool available. We love the one-question-at-a-time experience. The Zapier integration with Typeform looks pretty cool too. We’re hoping to check this out more in the future, and sync our Typeform entry data to other apps like Mailchimp.

[image:{"fid":"2923","width":"full","border":false}]

[image:{"fid":"2924","width":"full","border":false}]

MailChimp

We use MailChimp to organize contact information and email our user research recruits. It is our experience that MailChimp is the most well-integrated email management platform available. MailChimp allows us to target subscribers by their interests or preferences within a single list (groups) or create segments within lists based on specific internal criteria (e.g. sending a new campaign only to the people who did not open the last survey email).
* What we’re still working on: We’re hoping to refine how our Typeform entries are integrated with our MailChimp list. Right now this process requires manual labor (copying and pasting).

[image:{"fid":"2925","width":"full","border":false}]

[image:{"fid":"2926","width":"full","border":false}]

Ethnio

We use Ethnio for recruiting users for interviews and surveys. Ethnio creates a web-ready recruiting widget (Ethnio calls it a “screener”) that you can add to your site with one line of code. We have enlisted over 70 recruits from Lullabot.com for user research. Ethnio's screener is awesome - set it and forget it - until you are ready to reach out to your recruits. Graphs of recruits, incentive payments, and a live dynamic recruits page are other awesome features. Twitter + Typeform have been a great duo for us. We like to blast a Typeform survey link directly to our followers for quick feedback.

Calendly

We recently used calendly.com to allow users to sign up for specific times for a remote interview. There are many apps like this one out there, but this one is working well for us at the moment. Calendly saved us a ton of time on back and forth email communication when setting up remote interviews.

Creonomy Board

We use Creonomy Board for collecting visual research as a team. The Chrome extension allows you to quickly grab your choice of a screenshot or full page, then you can add the image to your shared boards, which you can categorize, add metadata, links, etc. The resulting board is a team-curated moodboard, which we use to start conversations around visual directions.

UserTesting.com

We have used UserTesting for DrupalizeMe.com design work. Usertesting’s tool allows you to capture video of users interacting with a visual design comp or working prototype. The highlight reel functionality is really useful for presenting research back to your team or stakeholders.

Google Docs and Sheets

I'm sure most readers will find this recommendation to be obvious, but if you aren't using Google Docs, you could be saving so much time on any kind of documentation. We use Google Docs and Google Sheets every day for live collaboration on documentation or note-taking. Whether we are taking notes as a team during remote user interviews, or collaborating on a research document, being able to collaborate and comment in the same live document is priceless. The “suggestion mode” tool within Google Docs allows a user to propose changes to a document, and the owner can review each individual change and accept or reject it. Game changer!

Summary

We’re always searching for amazing tools to help our research process become faster and cheaper, yet more valuable. If you have any suggestions or ideas for us, let us know!

Though a real-time interview is just one part of a well-rounded hiring process, it's an important one. A good interview isn't about checking off a list of qualifications—there are more efficient ways to do that. Rather, a good interview gives a candidate a platform to show their unique skills and personality. I've had the privilege of being involved in interviews both at Lullabot and at previous companies, and I've picked up a few guidelines along the way that help me conduct and evaluate an interview.

The Reasonable Truth

Interviewing for a position is hard for both the applicant and the interviewers. Many interviews walk a fine line between what questions are asked and what answers are reasonable to expect from a candidate. Sharing work products (such as project plans, marketing materials, or code) gives real insight into what a person is best at. For candidates, this can be difficult when they are subject to client NDAs. Some may work at restrictive organizations that don't allow code contributions. Communication and personal interaction questions can have the same issues, where too much detail could reveal identities that should be kept private.

When I feel like something is missing, my mind begins to race with uncertainty. Were details omitted because of NDAs, or is the story being modified at the compromise of honesty? I like to ask myself, "Did the person tell the truth with the right amount of truthiness?" I want any lessons learned to be communicated by the candidate with authority and integrity. I don't want all the details. In fact, I appreciate the candidate respecting the confidentiality of their prior engagements. But what I need is an authentic and sometimes vulnerable conversation to get a glimpse of their character and what defines them. A candidate should be able to tell an honest story about work they've done in the past even if they have to redact some specifics. If they play the confidentiality card to the exclusion of any insight about them personally, I'm left with too much uncertainty to recommend them.

The Three E's of Interviewing

I want to know if the candidate shows empathy towards their coworkers and their clients. I can't emphasize just how important this is. Empathy is our first defense against stress and discord, especially when projects march toward fixed launch dates. It's too easy for agencies and clients to start blaming each other when timelines, budgets, or functionality start to change to meet launch deadlines. Hiring for empathy gives our entire team the ability to handle more projects with less burnout.

I also like to get a chance to evaluate how a candidate approaches education. I don't just mean education in the strict institutional sense, but how they learn day by day, apply new lessons to their work, and share their experiences with their colleagues and the world. Seeing how a candidate writes and shares their knowledge provides insight into how they will share the same knowledge when working with clients.

Finally, though perhaps a bit unconventional, I like to get an idea of how entertained a candidate is by the work they do. I don't mean that someone finds their work to be "fun", to the detriment of balance in their life. I want to know if a candidate finds the humor of the crazy client and technical situations we sometimes end up in to be a positive, entertaining part of the work we do. This value isn't just about immediate team and client interactions. I've found that people who have this quality can both break the ice in tense situations and are more resilient against burnout.

A place to grow

In the agency world, we are often hiring to fill a specific role on a specific project. Unlike other tech industries (like entertainment and video games) that go through boom and bust hiring cycles, our goal is to consistently have low turnover with our staff. When considering a candidate, I like to ask myself, "Can this person be a contributor immediately, and a leader in 6 months?" Every person we hire should be immediately useful to the team so that they can feel valued and important. I feel it's not fair to assume a new hire becomes a "leader" instantly. Every person needs to find their niche and to find what drives them when tackling a new job. But what I want to know when we hire someone is whether they have the potential to become a leader in something. This is rarely about managerial roles and responsibilities; instead, it's about empowering everyone to become experts in their own way.

Lullabot needs leaders because of the way we work. Most of the team works directly with our clients. As we're a distributed company, we need proactive and deliberate communication. You can't hide in a cubicle here. Hiring for leadership helps ensure that those we do hire have the best chance of succeeding.

Like coding, an interview process is never done or complete. I'm sure there are guidelines I've missed. What are yours?

In our last post on Drupal 8 theming fundamentals, we learned to set up a theme and add our CSS and JavaScript. This time around we’re talking about the Twig templating engine, how to add regions to our theme, and then finish with a look at the wonderful debugging available in Drupal 8.

If you’ve been following the changes in store with Drupal 8 then you’ve heard about the Twig templating engine. It’s a product of SensioLabs, the company responsible for the Symfony framework, parts of which are being used in Drupal 8.

Before we get started, a brief reminder. When making any changes to your theme you’ll need to clear your cache to get them to take effect. You can either do this in the UI under /admin/config/development/performance or you can use Drush. The Drush command to clear caches in Drupal 8 is drush cache-rebuild or drush cr as a shortcut.

Dissecting a Twig Template

Let’s begin by taking a look at the markup for an actual Drupal 8 template. Templates in Drupal 8 have a common naming pattern: the template name followed by .html.twig. The example we’ll dissect here is region.html.twig.
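Here it is in full, reproduced from Drupal 8 core at the time of writing (with its documentation comments omitted, as noted below):

{%
  set classes = [
    'region',
    'region-' ~ region|clean_class,
  ]
%}
{% if content %}
  <div{{ attributes.addClass(classes) }}>
    {{ content }}
  </div>
{% endif %}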

Let’s go through this file and see what’s going on. The first thing we notice are the two different types of code blocks (also called block delimiters). One type of code block is marked by double curly braces like so:

{{ … }}

and the second is marked by a curly brace and percent sign:

{% … %}

These have been called the “say something” and “do something” types of code blocks, respectively.

At the very top of the file we see this code:

{%
set classes = [
'region',
'region-' ~ region|clean_class,
]
%}

The opening delimiter is a “do something” block. In this case, we’re using the set keyword to create a variable named ‘classes’ that has as its value an array—indicated by the [ … ] bracket notation that surrounds the values being defined.

This particular array has two class names stored inside of it. The first is simply ‘region’, but the second is a more specific class name with a syntax that may look a bit unfamiliar. It’s prefixed by ‘region-’ and then followed by a squiggly line with another bit of text.

'region-' ~ region|clean_class

Let’s dissect this piece by piece. Notice the ~ character. This is called a tilde and it is used to concatenate—that is, connect—two strings in Twig. Right after the tilde we see a reference to region. It’s not in quotation marks and that tells us it’s a variable that is available inside of this template rather than a string.

One way to know which variables are available in a template is to simply open up the template file. The variables should be listed in the top comments. For the sake of brevity I’ve excluded the comments in this example, but if you’re following along and want to review them, you can find this file under core/modules/system/templates. We’ll look at another useful way to inspect the variables on a page a bit later when we talk about debugging.

The final thing to note about this variable declaration is the |clean_class part which appears right after the region variable. This is a Twig filter. Filters are indicated by the pipe character | and are followed by the filter name. In this case, we have the clean_class filter which converts a string into an acceptable CSS class name.

Now that we see how this variable has been set, let’s move on to the next part of the template file: the {% if content %} check that wraps the rest of the markup.

We see another “do something” block which contains some simple logic to test if the content variable is present. This is a very good practice so that we aren’t printing empty blocks of markup to the page, which is what would happen if we weren’t checking this and the content variable was empty.

Next we see:

<div{{ attributes.addClass(classes) }}>

Because of the {{ … }} format, we know we have a “say something” block. In this case we are printing out the attributes for the content div. We’re passing in the classes variable we created in the previous step to the addClass method so that the classes are printed out.

As we’ll see in the debugging section below, there are tools that we now have that will make it easier for us to see what is inside these variables so that we can change them if needed.

The last bit of code in this file is simply {{ content }} which prints the content variable to the page. If you’re familiar with Drupal 7 theming, this new syntax probably doesn’t seem too bad.

Some advantages of using Twig as a templating engine are that it’s a mature, secure and well-documented system—and that documentation is very helpful if you get stuck.

Defining Page Regions

A common task for theme developers is defining the regions for the page template. These regions include the header, footer and main content areas where we can place blocks of content.

In our current theme ‘Atlas’, we haven’t defined any regions. By default Drupal 8 will assume the regions that are defined in the page.html.twig file located in core/modules/system/templates.

Changing these defaults is pretty straightforward. The first step is to return to our atlas.info.yml file and add the following:
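A sketch of that addition (the machine names and labels below are my choices; pick whatever fits your theme):

regions:
  header: 'Header'
  content: 'Content'
  footer: 'Footer'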

I’ve added three basic regions, but you could add others as you see fit.

The next step is to copy the page.html.twig file from the core templates folder referenced above and place it in a folder named templates within our own theme.

In the screenshot below is a cleaned up version of page.html.twig that reflects the regions I just defined for my theme. You may not want to remove as much of the default markup as I have, but it’s helpful here for demonstration purposes.

[image:{"fid":"2472","width":"half","border":false}]

Let’s briefly go over this. First of all, notice that I’m checking to see if the header and footer regions exist. This is a good practice if a region will not be appearing on every page. It ensures that empty markup doesn’t get added to the page.

Placing the region on the template is as simple as adding {{ page.region_name }}. Replace region_name with the actual region name you want to add to the template.

What if we wanted to have different versions of this page template? For example, what if we wanted a different page template for the front page?

In that case all that we would have to do is save our page.html.twig file as page--front.html.twig, make our desired markup changes, clear cache and the new template will take over the front page of our site. The documentation on the naming conventions for overriding template files is quite good and should get you a long way in customizing your theme templates.

Twig Debugging

It’s great that we have the ability to override our template files, but we are still left with the question of how we are to know which template file needs to be overridden in any particular case. After all, template files are nested one inside the other—which one contains the markup that we want to change?

This brings us to the killer feature of Drupal 8 theming—Twig debugging. Getting started is very easy. Inside of the sites/default folder is a file named services.yml. Inside that file you will find a setting for Twig debug. This setting will be set to false by default. Simply change it to true as shown in the image below:
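In text form, the relevant section of services.yml looks like this (matching the screenshot below):

parameters:
  twig.config:
    debug: true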

[image:{"fid":"2864","width":"full","border":false}]

As usual, clear your cache after making this change. When we return to our page and inspect the markup we will find debugging information added in the form of code comments. Here’s an example of what our home page markup will look like when inspected in DevTools:

[image:{"fid":"2869","width":"full","border":false}]

You can see the debug info provides the path to the new template file we have created. This information is repeated for other template files found on the page, providing a fast and easy way of finding the file you need to override or edit.

Important: Do not edit template files in core! Copy the file to your theme and edit that version instead.

Also included are file name suggestions for overriding the currently active templates. If you’ve ever had to track down the correct template file to override in Drupal 7, then you know what a huge improvement this is.

Update: David Rothstein pointed out in the comments that as of Drupal 7.33, this functionality has been backported to Drupal 7!

Finding Variables

Earlier I mentioned that there was a great way to inspect the variables available in a template. The syntax we use to print the variables is simple:

{{ dump() }}

This is placed inside the template where you’d like to see the available variables. In many cases this is going to result in a large number of variables being printed to the page. It may be helpful to narrow it down a bit, for example:

{{ dump(title) }}

This example will only print the title variable to the page. Take a look inside the template file for a list of the available variables—they will be listed in the comments at the top of the file.

Using the Atlas Starter Theme

Before we finish up I’d like to share the Atlas theme we’ve been working on throughout these posts. I’ve included it on GitHub so that if you’d like a quick start on creating a Drupal 8 theme you can simply download it and start hacking away.

In the coming days and weeks I’ll keep adding to Atlas to improve its usefulness. For example, there will be a simple Gulp workflow added for the CSS and JavaScript files as well as a few other commonly used tools and processes.

That’s it for the series! I hope you’ve enjoyed learning about Drupal 8 theming as much as I’ve enjoyed writing about it. Until the next time, happy coding!

Back in October, I received the Oculus Rift DK2 for my birthday and found what I think will be the future of how we build websites, interact with customers, and communicate with each other. I found a community of enthusiasts who are building virtual worlds with the very same concepts we use to build two-dimensional websites today. This community is pushing the boundaries of the web in a way that is unlike any other, and we’re having an absolute blast discovering the possibilities.

Let me paint a picture for you. Think first of the web as you know it today: text, images, and links to other websites, some of them on different domains. Today, the web is a flat surface, sometimes a highly complex and interactive document, but almost always a two-dimensional representation of information that you look at on your two-dimensional screen, which allows you to scroll up and down, left and right.

Now, instead, imagine a room with four walls. On the front wall you have a beautiful image in a picture frame, but this image looks more like a hologram, with depth. You can see that the river in the picture is closer to you and the mountain in the background farther away. Maybe a bird flies from the water and into the distance. On the wall to your right is a glowing doorway with a sign over the top that says, “Lullabot” and then an address such as you’re already familiar with, “http://www.lullabot.com”. You click on this glowing doorway and it disappears, allowing you to look through and see what is on the other side. Is this a website or a video game?

You walk through the now open doorway and find yourself in a brick building. Windows to your left and right reveal an otherworldly landscape. A life-size Lullabot sits in the corner beside a large flat screen with what looks like a two-dimensional website on it. In the middle of the room there’s a small area with gently swaying bamboo surrounded by rocks and wood chips—a little zen garden where you can walk around. As you walk, you see an open passageway to another part of the room where famous logos line the walls, and more doorways that—once you click and walk through—showcase how two-dimensional websites look and act on various devices. A large screen shows the MSNBC website at desktop resolution. A giant iPad beside that has the same website on it at tablet breakpoint, and a large floating iPhone beside that shows how the website looks and responds on a mobile device.

Sounds a bit crazy you say? Crazy or not, this is now a reality; virtual reality.

[image:{"fid":"2430","width":"full","border":false}]

To visit the Lullabot Lounge I’ve described, download JanusVR from http://janusvr.com, launch the app, hit the Tab key and type in this address: http://lullabot.github.io/Lullabot/loungeVR/. A new glowing portal will appear. Click on it and walk through! There you’ll be able to see the Lullabot Lounge in VR. The code and assets for this room are hosted on GitHub, so feel free to fork or send us pull requests!

Enabling the 3D Web

The community that I’m part of is based around a piece of free software called JanusVR. At its heart, Janus is a new web browser which can interpret a subset of XML we’re calling JanusML. Akin to what the ReactJS community has done with its JSX specification, JanusML extends XML for new applications.

JanusVR is built by one person currently, Dr. James McCrae, who hails from the University of Toronto. JanusVR makes it easy for anyone to create a three-dimensional website, fostering a community around this software. James has taken the concepts built into typical every-day web browsers like Chrome and Firefox and applied them to the way you build a three dimensional website. In fact, the original name of the project was “Firebox” and JanusML was “Firebox code.” In truth, you don’t even need the Oculus Rift or any other head mounted display to view these sites though it helps enhance the experience. If JanusVR doesn’t detect an Oculus Rift, it launches in 2D mode and you can use it without headgear. It’s also cross-platform and founder McCrae aims to keep it that way.

JanusML defines a way to load Assets such as images, audio, three-dimensional models, and more. You can then define coordinates and other attributes for them within the Room tag in order to build a three-dimensional space through which you or anyone else with an internet connection can navigate.
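From memory, a bare-bones JanusML page has roughly the shape below. Treat the tag and attribute names as approximations and check the JanusVR documentation before relying on them:

<FireBoxRoom>
  <Assets>
    <AssetImage id="logo" src="logo.png" />
  </Assets>
  <Room>
    <Image id="logo" pos="0 2 -5" />
  </Room>
</FireBoxRoom>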

JanusVR represents you and any other user with an avatar chosen by default. Through the use of an open source multi-user server written in NodeJS, you can see other people’s avatars within these spaces and interact with them. You can download and run the janus-server code on your own server if you want and then specify the address of your server in your JanusML’s <Room> tag with the server attribute.

JanusVR also has a basic editing mode which allows you to collaborate on building these 3D spaces with the people you meet there. Take a look at some of the JanusVR Basics written up on VR Sites. There’s also basic JavaScript support for scripting your rooms. I’ve done a little experimenting with this in the form of a scavenger hunt for which you can see the code here or you can experience the 3D version in JanusVR by downloading Janus and entering this URL: http://bitgridio.github.io/LullabotVR/lullabot.html

A Dedicated Community

The JanusVR community mainly communicates via r/JanusVR (a subreddit forum) and via a Mumble server (for voice chat). The Mumble server is the current voice channel for the community until voice is built into Janus itself. New releases are announced on the Reddit forum and bug reports and feedback are discussed there as well. We also have an IRC channel on freenode at #janusvr and #vrsites, and don’t forget to follow @officialjanusvr on twitter.

Everyone I’ve met in the community has been helpful and willing to take time to show me around the hundreds of rooms that have been built for Janus. I’ve been given advice on optimizing a Janus room, learning JanusML, and modeling with Blender. To get involved, begin with the following sticky post on the subreddit.

One of the first things I found while wandering around within JanusVR was that there was some dedicated hosting for JanusVR rooms called VR Sites. VR Sites is a hosted sandbox where you can upload assets and edit your JanusML. This was a fairly simple PHP system that allowed people to upload assets and edit some basic HTML files in order to create their JanusVR rooms. It made it quick and easy to get started without needing to run my own web server.

I soon got in touch with u/qster123 and started talking about hosting. As it turned out he was looking for help in maintaining the site and I soon convinced him to switch to Backdrop. We rebuilt most of the functionality in about an hour with Backdrop. We have a few more things on the roadmap, but overall it’s been an excellent experience working with qster123 and Backdrop both. To help out or get involved, drop me a line at u/sirkitree or in any of the other places previously mentioned.

What The Future Holds

The JanusVR community is still experimenting, brainstorming, and figuring out what the future holds for this new medium. We’re exploring possibilities for enterprise clients and how to present content in a three-dimensional space. Potential use cases include demonstrations of software and products in virtual spaces, educational VR field trips to faraway places, business and social collaboration, and even gaming. We’re discovering the current limitations and learning new skills such as 3D modeling and what UX means in such a world. If you’re interested in joining us, please contact us in one of the many mediums I’ve mentioned above. We can’t help but look forward to the future, and we hope you’ll help us bring it to life.

JavaScript was traditionally the language of the web browser, performing computations directly on a user’s machine. This is referred to as “client-side” processing. With the advent of Node.js, JavaScript has become a compelling “server-side” language as well, a domain traditionally occupied by languages like Java, Python and PHP.

In web development, an isomorphic application is one whose code (in this case, JavaScript) can run both in the server and the client. In this article we will inspect how an isomorphic application processes requests depending on where the request comes from.

We are currently revamping the front end of Lullabot.com using Node.js (a runtime environment for server side applications), React (an isomorphic JavaScript library built by Facebook), and CouchDB (a NoSQL database that has a REST API). These technologies will be the main actors in the following examples.

An isomorphic request, server side

In an isomorphic application, the first request made by the web browser is processed by the server while subsequent requests are processed by the client. We will use the following article as an example:

http://localhost:3000/articles/importing-huge-databases-faster

The asset /build/bundle.js is the React web application, which contains the list of routes, templates and components required to navigate through the site. We will see further down how /build/bundle.js assumes command of the navigation through the website after the first request. Here is a step-by-step list of how this first request was processed:

The request reached the web server and was passed to a Node.js application.

Node.js passed the request to React, which fetched the article’s data from CouchDB, built the full page, and returned it to Node.js.

The user sees the response in HTML, while the browser downloads the React application (/build/bundle.js) asynchronously.

Here is the trick: this same request is processed differently if instead of entering the article in the address bar and hitting Enter, we navigate to the list of articles and then click on the article’s link. Let’s try it out.

An isomorphic request, client side

Now that we have the article (and the application) loaded in the browser, let’s navigate to the list of articles and click on the same article we accessed in the previous section. Here is the link to the article that we will click:

http://localhost:3000/articles/importing-huge-databases-faster

This time, everything happened in the browser:

The click was intercepted by the React application (/build/bundle.js) instead of triggering a full page request.

React fetched the article’s data directly from CouchDB.

React passed the article data to the article template and rendered it.

The main difference between this request and the one in the previous section is that this time the request did not hit our Node.js application. This is fast, efficient and made my jaw drop the first time I saw it.
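To make the shared-code idea concrete, here is a minimal sketch of the pattern. This is not our actual code: the Article component, the fetchArticle and wrapInLayout helpers, and the Express-style route are invented for illustration, and the rendering calls match the React 0.13 API that was current when this was written.

var React = require('react');
var Article = require('./components/Article');

// On the server (inside the Node.js app): render the component to an HTML string.
app.get('/articles/:slug', function (req, res) {
  fetchArticle(req.params.slug).then(function (article) {
    var html = React.renderToString(React.createElement(Article, { article: article }));
    res.send(wrapInLayout(html));
  });
});

// On the client (inside /build/bundle.js): the same component takes over the markup.
React.render(
  React.createElement(Article, { article: initialData }),
  document.getElementById('app')
);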

Comparing server/client requests

The following screenshot summarizes the above interactions by comparing them side by side:

Notice how the network activity log varies from one to another. For the one on the left we entered http://localhost:3000/articles/importing-huge-databases-faster in the address field of our browser and hit Enter; while in the other instance we clicked on the article listed at http://localhost:3000/articles. The end result is identical for the user, but the server-side method (which is the common way of doing things today) took 40x as long for the basic page load as the client-side method.

Benefits of going isomorphic

I have fallen in love with this kind of architecture. In the past, we would use AJAX and a bit of server-side code to make certain interactions with the page dynamic, but this always involved duplicating logic across both PHP and JavaScript. The most common example of this is a pager, where a direct request to ?page=2 is processed differently than a click on a pager link that takes you to page 2. Here is what we gain by building an isomorphic application:

Old devices can browse our site because the application returns HTML, which differs from common Single Page Applications (SPA), where the body tag contains little more than JavaScript.

The first page request is fast and subsequent ones are even faster; as opposed to common SPAs, where the first request is used to load the application and then a round trip is made to fetch what was requested.

There is less code, as it is shared by both the client and the server.

Risks of isomorphic applications

There is a considerable learning curve when building an isomorphic application for the first time. In our case, when we rebuilt Lullabot.com, only Jeff Eaton and Sally Young were familiar with how isomorphic applications worked. The rest of us had to learn along the way and—while it was a mind-blowing experience—that can be frustrating as well.

Here are some of the challenges where I found myself banging my head against the wall:

Debugging is trickier

I struggled to debug code as the first request runs on the server and the second in the client. For the former I had to set up a Node.js debugger while for the latter I had to learn how Browserify packs JavaScript files in order to use the web browser’s debugger plus the React Developer Tools Chrome extension.

Bye bye jQuery

We decided not to use jQuery since, when React runs on the server, there is no DOM to manipulate. For cases where we had to process Rich Text we used Cheerio, a leaner alternative. This made me realize how dependent I was on jQuery when I struggled to write a POST request using XMLHttpRequest instead of $.post().

Taking into account where the code would run

I had to be aware at all times whether the code that I was writing would run just on the server or also on the client, because the client doesn't have access to the file system, which excludes many popular Node.js modules from being used.

Avoid exposing sensitive data

Every module required via Browserify would end up being part of the web application (build/bundle.js). This meant that we had to be cautious about which modules to require client side. For example, I almost exposed our MailChimp API keys by mistake while writing the Newsletter component.

Managing settings

We had to manage two sets of settings: server side settings (such as API keys and other credentials) and client side settings (such as the hostname of CouchDB, ElasticSearch and other resources). Some of these settings vary in local, development and production environments and we were not sure of how to make it easy to manage that. We found a solution by mixing the envify and dotenv modules.

Conclusion

There are advantages and risks when writing isomorphic applications. The technology is certainly exciting and I encourage you to try any of the available isomorphic libraries and see where it fits best in your scenario. Then, feel free to post your thoughts in a comment below.

There are a lot of challenges within responsive web design, and one that has constantly been a pain is triggering JavaScript based on the current CSS media query breakpoint. The problem is that the breakpoints are in CSS, which JavaScript has no native way to access. Many solutions (including window.matchMedia(), and Enquire.js) involve declaring your breakpoints in both CSS and JS, or require IE10+. The problem with these solutions is that when you change a breakpoint value, you have to change it twice. However, it doesn't need to be like this.

A Simpler Solution

A quick and easy solution to this problem is to have your JS import the breakpoints directly from the CSS values in the DOM. This solution brings the current breakpoint variable into your JS in a way that's always in sync with your stylesheet.
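A minimal sketch of the technique (the breakpoint names and the body::before convention are assumptions; match them to your own stylesheet):

/* In your CSS: publish the current breakpoint name into a pseudo-element. */
body::before { content: "mobile"; display: none; }
@media (min-width: 600px) { body::before { content: "tablet"; } }
@media (min-width: 1025px) { body::before { content: "desktop"; } }

// In your JS: read the value back out.
function getBreakpoint() {
  return window.getComputedStyle(document.body, '::before')
    .getPropertyValue('content')
    .replace(/["']/g, '');
}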

There's a couple things going on here. I'm querying the content property on the ::before pseudo element using this method popularized by David Walsh. I can't attach the content property directly to the body tag, because Internet Explorer 9 will return a value of "normal" when querying. IE10 and IE11 work fine. Ugh!

Firefox and IE return the value with double quotes, while other browsers do not. To get consistent values I'm using replace() with regex to strip those out.

Trigger on resize and page load

Breakpoints change based on your browser's viewport width, so I need to update the value when the browser is resized. I also trigger a resize event on the initial page load to get the first value.
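Something along these lines works (again a sketch; what you do with the new value is up to your application):

var currentBreakpoint;
function onResize() {
  currentBreakpoint = getBreakpoint();
  // respond to the new breakpoint here
}
window.addEventListener('resize', onResize);
onResize(); // run once on page load to get the first value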

Sample Use Case

While redeveloping the website for Syfy.com, I had an interesting problem where I needed to inject a 728x90 leaderboard advertisement within the second row of tiles. The difficult part of this is that the number of tiles per row changes depending on the current breakpoint.

The final step, just as above, was to run the JS on browser resize and page load.
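
A rough sketch of how the pieces might fit together (tile counts, class names, and IDs are invented for illustration):

// Tiles per row at each breakpoint, mirroring the CSS grid.
var tilesPerRow = { small: 2, medium: 3, large: 4 };

function placeLeaderboard() {
  var count = tilesPerRow[getBreakpoint()];
  var tiles = document.querySelectorAll('.tile');
  var ad = document.getElementById('leaderboard');
  // Insert the 728x90 ad before the first tile of the second row.
  if (count && tiles.length > count) {
    tiles[count].parentNode.insertBefore(ad, tiles[count]);
  }
}

window.addEventListener('resize', placeLeaderboard);
placeLeaderboard();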

Conclusion

This is a simplification of a very useful technique. The same logic can also be used to pass breakpoint states of elements by modifying pseudo-elements of any element within the DOM. For example, if you have an element that has a CSS change between multiple breakpoints, you can pass and query the pseudo-element, or can even query for the changing CSS property itself. Feel free to hack around and fork this over at Codepen.

]]>https://www.lullabot.com/articles/importing-css-breakpoints-into-javascript1686 at https://www.lullabot.comTue, 09 Jun 2015 17:00:00 GMTIn this series of posts we’re going to dig into some of the fundamentals of Drupal 8 theming. By the time we’re finished we’ll have a solid understanding of how to apply many of the new tools and techniques in our work. We’ll also have a starter theme we’ll be able to use in our future projects.

We’re going to begin by building the bare minimum required to get our theme working. We’ll create the basic file structure as well as a critical configuration file so that Drupal will recognize our theme and let us enable it.

Before we get started, a brief word on the current state of Drupal 8. At the time of this writing the latest version of Drupal 8 is beta10. That means some of the things below may change as we approach the release candidate. Nevertheless, the large majority of this tutorial should hold up just fine.

Adding YAML Config

If you’re familiar with Drupal 7 theming (or module development, for that matter), then you have probably worked with .info files before. If not, a .info file is found in the root of the theme or module. This file tells Drupal the theme exists and provides other important information. In Drupal 8, the old .info files are gone and have been replaced by YAML files - pronounced “yamel” (rhymes with camel).

The move to YAML files is something that you’ll notice throughout Drupal 8. It’s a file format used in the Symfony PHP framework, parts of which are now used in Drupal 8. Fortunately, YAML is pretty straightforward and most folks won’t have trouble making this adjustment.

In a Drupal 8 theme, we create a .info.yml file. The naming and placement of this file is very important. It has to include the name of our theme and it has to be saved inside a folder with the same name as our theme. For this project I’m going to name the theme ‘Atlas’. Therefore, the .info.yml file will be named atlas.info.yml and will be placed inside a folder named ‘atlas’. If your theme has multiple words in its name, they should be separated by an underscore. For example, MY_THEME.info.yml.

The directory structure of Drupal 8 has changed so we won’t be placing this theme in the sites directory as you might expect from Drupal 7 (although that’s still an option if you really want to). Instead the file is placed under the ‘themes’ folder that is found in the root of your Drupal 8 installation.

Within the themes folder we can further organize our files into ‘contrib’ and ‘custom’ folders. Contrib themes would be those downloaded from Drupal.org - a base theme, for example. The custom theme folder is where we should place our ‘Atlas’ theme. When we’re done, the folder structure should look like this:

[image:{"fid":"2424","width":"full","border":false}]

Adding Required Mappings

In our info.yml file, we’re going to have to create a few key/value pairs called mappings. The four required mappings that we need to create in order for our theme to work are: name, description, type, and core. There are others we will add later, but for now all that we want to do is have Drupal see our theme so that we can enable it. Here’s what the file looks like with the required information added:
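
(Reconstructed as a minimal sketch; the description text is invented.)

name: Atlas
description: 'A Drupal 8 starter theme.'
type: theme
core: 8.x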

There are some rules regarding the use of quotes in YAML files. If your description has certain characters in it—an ampersand, for example—then you’ll need to put it in quotes. You can use either single or double quotes, just be consistent.

Enabling the Theme

There are two ways to enable a theme—by navigating to admin/appearance in your browser, or by using Drush. If you choose to use Drush, you’ll need to update to Drush 8 as it’s required for working with Drupal 8. Although this may be extra work upfront, I highly recommend it as it will save huge amounts of time going forward. Upgrading to Drush 8 may require that you maintain multiple versions of Drush. Many folks won’t run into this, but it’s something to keep in mind.

Once you have Drush 8 installed there is a new command for updating the default theme. To enable the Atlas theme we’ve created, execute the following Drush command:

drush config-set system.theme default atlas

Of course, replace ‘atlas’ in the command above if you’ve chosen a different name for your theme. The changed syntax for updating the default theme is due to the new configuration management system in Drupal 8. For more information on some of the other changes to Drush, this post by Aurelien Navarre is very useful.

Adding CSS and JavaScript

Before we really dig into adding our CSS and JS files, let’s get a handle on where things stand. We’ve created our theme and we’ve also enabled that theme. We don’t have anything in our theme besides the .info.yml file, so you would expect to see no styling on the page, and mostly, that’s what’s happening. Take a look at the screenshot below from my local machine. I’ve added some sample content using the Devel module for demonstration purposes.

[image:{"fid":"2423","width":"full","border":false}]

Above, I’ve used a red arrow to point out some styled content where there is padding around a menu item (the bullet has also been removed from the list item). The reason we’re seeing some styling on the page is because Drupal 8 has included the CSS from modules that are providing content to the page.

To identify these files (so that we can remove them) let’s have a look at the source.

[image:{"fid":"2420","width":"full","border":false}]

Drupal is adding four stylesheets from core. Whether or not you choose to remove these depends on your role. If you’re a front-end developer, then you should probably remove them and keep all of your CSS inside the theme which allows for a separation of concerns and is a best practice.

The good news is that if you do decide to remove them, it’s a snap. Let’s return to our info.yml file so that we can remove these files from our pages. The key that we’re going to be adding is stylesheets-remove. Here’s what our .info.yml file will look like when we’re done:
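
(Again a sketch; the exact paths depend on which core stylesheets actually appear in your page source.)

name: Atlas
description: 'A Drupal 8 starter theme.'
type: theme
core: 8.x
stylesheets-remove:
  - core/assets/vendor/normalize-css/normalize.css
  - core/modules/system/css/system.module.css
  - core/modules/system/css/system.theme.css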

You’ll notice the new mapping at the end of the file. The key stylesheets-remove signals to Drupal that you want to remove one or more stylesheets. It’s followed by what is called a sequence in YAML - basically a list of the CSS files you’d like to remove.

These should be nested below stylesheets-remove, indented two spaces and prefixed with a dash. Together these elements are known as a collection in YAML. In standard YAML, the number of spaces the sequence is indented isn’t important, so long as it’s at least 1 space. In Drupal, we use two spaces.

Note: Although the stylesheets-remove key will work, it’s due to be phased out in Drupal 9 and replaced by libraries-override, which will also be available in Drupal 8 and may provide you with more flexibility. For additional information, you can review this issue on Drupal.org. At the time of this writing, however, only stylesheets-remove will work.

Adding Our Libraries

In order to add our CSS and JavaScript we have to introduce a somewhat new concept for many theme developers—libraries. Don’t worry, this is going to be easy and there is a good reason for the change. We begin by creating a new YAML file in the root of our theme. This file should have the naming convention THEME_NAME.libraries.yml.

We’re adding our files in this way to provide consistency between the way front-end and module developers add asset files like CSS and JavaScript. Let’s take a look at the format of our libraries file, which I’ve named atlas.libraries.yml:
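
(A sketch assuming one global stylesheet and one global script, matching the file structure shown later in this article.)

global-css:
  version: 1.x
  css:
    theme:
      css/styles.css: {}

global-js:
  version: 1.x
  js:
    js/script.js: {}
  dependencies:
    - core/jquery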

Let’s go over this. The basic structure is referred to as nested mappings and is very similar to what we did for the info.yml file. You’ll also notice that I’ve created two libraries—global-css and global-js. These are names that make sense to me for these global assets, but you can name them anything that you like.

Next we have a key that identifies what type of library we are adding. If you are adding CSS, you’ll next need to include the ‘theme’ key followed by the path to your CSS file(s). I’ve only added one CSS file in this example, but you could add as many as you need (a print stylesheet, for example). The curly braces allow you to add a value to the stylesheet path key - one good example of the usefulness of this is adding a media query for a stylesheet:
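
(The print stylesheet and the width value here are purely illustrative.)

global-css:
  version: 1.x
  css:
    theme:
      css/styles.css: {}
      css/print.css: { media: print }
      css/wide.css: { media: '(min-width: 960px)' }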

For the JavaScript library, we see something a bit different. We have a new key here called ‘dependencies’. In this case I’ve added jQuery to demonstrate what this might look like, but it’s important to note that core doesn’t add jQuery or other JavaScript by default on pages where it’s not needed. This will help keep Drupal websites nice and lean. Also notice that the dependencies need to be added as a sequence (the lines preceded by a dash).

Now that we have our libraries file, let’s add it to our theme. Returning to our info.yml file, we now see the following:
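
(A sketch, carrying over the earlier illustrative entries.)

name: Atlas
description: 'A Drupal 8 starter theme.'
type: theme
core: 8.x
stylesheets-remove:
  - core/assets/vendor/normalize-css/normalize.css
  - core/modules/system/css/system.module.css
  - core/modules/system/css/system.theme.css
libraries:
  - atlas/global-css
  - atlas/global-js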

You’ll notice that we’ve added a new mapping with the key of ‘libraries’. Although adding CSS and JavaScript in this way may feel unfamiliar at first, it’s actually pretty easy to get the hang of and will help provide a standard way of dealing with these assets across an entire site.

Let’s take a look at the file structure we have thus far:

[image:{"fid":"2422","width":"full","border":false}]

Although not visible in the image above, there is a script.js file in the js folder. One interesting quirk: the references to the files we’ve just defined will be added to the page even if the files themselves don’t exist, so double-check your paths to avoid 404 errors.

Adding Libraries to Specific Pages

Something that often comes up is the need to add CSS or JavaScript to a single page - maybe you’re adding a JavaScript library to the front page for a fancy effect.

Doing this will require that we add a new file to our theme that should be named THEME_NAME.theme and it’s the successor to the template.php file in Drupal 7 theming. Let’s add atlas.theme in the root of our theme folder.

Next we’ll add the new library to our theme. When we’re done, our atlas.libraries.yml file should look something like this:
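
(I’ve named the new library fancy-effect purely for illustration, echoing the example above.)

global-css:
  version: 1.x
  css:
    theme:
      css/styles.css: {}

global-js:
  version: 1.x
  js:
    js/script.js: {}
  dependencies:
    - core/jquery

fancy-effect:
  version: 1.x
  js:
    js/fancy-effect.js: {}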

We now have a new library defined in our theme, but instead of attaching it globally in our info.yml file, we’re going to need to add it via a preprocess function in our atlas.theme file (yes, preprocess functions are alive and well in Drupal 8 theming).

Here’s how it would look to add this JavaScript file to our front page via a preprocess function:
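
(A sketch; the library name matches the illustrative one above.)

<?php

/**
 * Implements hook_preprocess_page().
 */
function atlas_preprocess_page(&$variables) {
  // Attach the extra library only on the front page.
  if (!empty($variables['is_front'])) {
    $variables['#attached']['library'][] = 'atlas/fancy-effect';
  }
}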

In order to get the above function to work, you’re probably going to have to clear your cache. You can either do this in the UI under /admin/config/development/performance or you can use Drush. The Drush command to clear caches in Drupal 8 is drush cache-rebuild or drush cr as a shortcut.

Drupal 8 CSS Style Guidelines

There are some new guidelines for CSS in Drupal 8. Adoption of the SMACSS file structure is being strongly encouraged and I think this makes a lot of sense, particularly if you are going to be working on projects hosted on Drupal.org or with a team of other Drupal developers. It provides a nice standard that will make it easier for others to understand how to work with your theme.

Another thing you’ll notice from the guidelines is that class naming conventions are also being encouraged. Although it doesn’t explicitly mention BEM class naming in the docs (at least at the time of this writing), that is essentially what is being recommended. Again, if you’re working on projects that will be hosted on Drupal.org or with a team of Drupal developers, these are good practices to adopt.

The last thing we need to briefly cover concerns a new base theme in Drupal 8 called Classy. Addison Berry does a great job of providing the backstory on the origins of this new base theme, but the short version is that it serves to provide additional classes that you may be familiar with from previous versions of Drupal.

Those classes are no longer included by default and since I’m in the group of front-end developers that is glad to see them go, we won’t spend time on how to make use of them. However, it’s useful to know they are available if you’d prefer to add them to your work. Check out Bartik theme (the default theme in Drupal 8) for an example of how to set up Classy as a base theme.

That does it for this part of the series on Drupal 8 theming. In the next installment we’re going to dive into the new Twig templating engine in Drupal 8. We’ll also look at adding new regions to our theme as well as the excellent new debugging features. Until then, happy coding!

]]>https://www.lullabot.com/articles/drupal-8-theming-fundamentals-part-11708 at https://www.lullabot.comThu, 04 Jun 2015 19:00:00 GMTOn April 21, 2015, Google rolled out a set of changes to its search algorithm so sweeping that the industry dubbed them "Mobilegeddon." Together, these updates dramatically boosted the impact of a site’s "mobile-friendliness" on its search rankings. Google says the changes will have "significant impact in our search results", though at least for now they only affect search results on mobile devices.

The name evokes disaster, a sudden and devastating change in how Google ranks sites that will cause widespread harm. The reality, though, is that real-world projects have had to be "mobile-friendly" for a long time. With 90% of American adults owning a cell phone, and 60% of them using their phones to access the internet, Google’s emphasis on mobile-friendliness should not come as a surprise. In October 2014 they added the Mobile Usability component to Webmaster Tools, urging web developers and businesses to get on board. “We strongly recommend you take a look at these issues in Webmaster Tools,” they said, “and think about how they might be resolved.”

Digging Into The Details

In the post-Mobilegeddon world, it’s important that a website do more than look good on a smartphone — Google must recognize its design as mobile-friendly, as well. Google is measuring mobile-friendliness using three main criteria:

Use generous tap target size and spacing. This relates to a user's ability to interact with the website. Tap targets should be at least 44px (wide and tall) and there should be a minimum of 32px between touch actions.

Avoid technology uncommon to mobile devices. Plugins like Flash and certain proprietary video players will not perform well or at all on a mobile device.

Display content without needless pinching, zooming, and scrolling. The days of letting the user pinch and swipe to access content have passed. Content needs to adjust to the viewable area in a way that is easily readable to the visitor.

The good news for many web developers (including your friendly neighborhood Lullabots) is that we have been ready for this for a long time. Designing sites with proper link spacing, avoiding Flash, using mobile-safe typography, and building with flexible layouts have all been part of our best practices for years.

It’s been 5 years since Ethan Marcotte introduced us to Responsive Web Design. It became the answer to making a website look good on an iPhone. The greater value of Responsive Web Design was that it provided a process for ensuring images and content displayed in a pleasing and meaningful way regardless of the viewport size. I believe that Responsive Web Design served as a catalyst for developers to discuss and prepare for this paradigm shift a long time ago.

Into The Future

So, what does life look like after “Mobilegeddon?” We take solace in the fact that best practices prepared us for the change; we sharpen our pencils, grab a cup of coffee, and continue to look for solutions to the problems beyond the horizon. There is little doubt that new challenges will come as businesses gain a better understanding of their content and how to deliver it, of new revenue opportunities, and of the future of advertising on the web.

Most importantly, what lies beyond the horizon are the needs of the users. We don’t know how those will change in the future, but we know that user expectations are always marching forward. Today there is an algorithm to judge mobile-friendliness; tomorrow it might be as immense as Virtual Reality or as inconspicuous as the face of a watch. Google search rankings are not a thing to be gamed, and we’ve never approached them that way. Good rankings are a byproduct of our desire to create quality websites that deliver content to the user cleanly and clearly, quickly and efficiently. The sites that do this well will be rewarded now and in the future.

]]>https://www.lullabot.com/articles/making-the-most-of-mobilegeddon1704 at https://www.lullabot.comWed, 27 May 2015 20:30:00 GMTDrupal is always changing. The community constantly reinvents Drupal with new code and reimagines Drupal with new words. This article seeks to examine the current narratives about Drupal. By examining the stories we tell about Drupal — the so called cultural constructions — we can better understand what is going well and what should be making us uncomfortable.

The dominant narrative surrounding Drupal 8 is that it will leave small websites behind, but that oversimplifies the situation. Focusing on this narrative ignores some of the more important issues facing Drupal, such as the influence of paid Drupal core developers on volunteerism, the personal connection that many people have with Drupal, or the importance of the GPL to Drupal’s longevity. The cultural constructions of Drupal sometimes change as quickly as the code, and this article will attempt to bring together a wide variety of competing narratives to reconsider why we use Drupal and challenge some of the prevailing constructions.

Drupal is for business

There have been quite a few articles published recently about Drupal and the enterprise, and many of them seem to take, as their point of departure, the following question: "Is Drupal 8 built for the enterprise?" When we dig a bit deeper into some of these narratives, it even starts to sound like the question might be, "Is Drupal 8 built by and for Acquia?"

Part of the answer to these questions seems rather settled. Yes, Drupal 8 is built with enterprise needs in mind. Yes, Acquia contributes a great deal of time and money to Drupal 8. I don't think these facts are in dispute.

Indeed, when Dries Buytaert, the creator of Drupal and co-founder of Acquia, talks to publications like Computerworld, he does not hide his intentions. He unabashedly makes statements about Drupal's future in the enterprise, such as:

"I think with small sites I'm not willing to give up on them but I think we just need to say we're more about big sites and less about small sites."

It would be fair to say that not all "big sites" are "enterprise" sites, or "corporate" sites, or even "money-making" sites, but I think we can also assume that many of them are. A quick look at the biggest sites on the web shows that most of them are the sites of for-profit companies. Big sites are generally big business.

"We wanted Drupal to be what Red Hat is to Linux, that's why we started Acquia.... I see us as being the next large open source business model to reach $1 billion in revenue, like Red Hat. We're on the IPO track — even though it's still early days, but we are getting ready."

To call Drupal 8 "enterprise-focused" is not controversial, especially if one believes that Dries and Acquia have any influence on Drupal. Drupal 8 will likely be a boon to large, for-profit companies, and Drupal will continue to attract companies that seek a robust, open source, enterprise content management system (CMS).

Nevertheless, Acquia is not the only large enterprise that affects the future of Drupal. When Dries announced Acquia's Large Scale Drupal (LSD) program, he began:

"Acquia works with many large enterprises that bet on Drupal. These organizations are doing amazing things with Drupal and innovating by breaking through prior limitations. However, in talking to our customers, we noticed that there is limited knowledge sharing and discussion happening among the heaviest Drupal users."

The LSD businesses conduct behind-closed-door meetings, share knowledge, decide what problems they want to solve, pay developers to create solutions, and eventually share those solutions with the broader Drupal community. As the LSD website tells us, these initial decisions are made by "key community leaders and developers as well as their peers at other leading organizations running Drupal." Following this process, the broader Drupal community receives these gifts, which it can then help grow. Dries wrote, "once contributed, anyone is welcome to discuss and assist the project." The advertised benefit of LSD, according to the Vice President of Large Scale Drupal at Acquia, is that we all get "significantly better software built by some of the most talented people in the community."

We are led to believe that LSD has the brains, the money, and the talent to make things happen efficiently. LSD reminds me of the early meetings of the open-source movement in 1998 that brought companies to gather in private and find ways to "monetize" the efforts of all contributors, as the New Yorker put it, "putatively in the name of progress and standardization." LSD might actually help solve what my colleague at Lullabot, Jeff Eaton, has called Drupal's "Platypus Problem": its inexplicable, emergent complexity.

While Drupal clearly benefits Acquia and its large, enterprise clients, there is much more to this story. When we change the question to something like, "Is Drupal 8 built only by Acquia and its partners?" we get a very different answer: absolutely not.

Drupal is not Acquia. Acquia employs four of the six people who can actually push code changes to Drupal 8, but thousands of people submit patches for consideration. While we cannot know for sure (we do not have a history of organizational commit credits), it seems very likely that the number of Drupal 8 code contributors who work for Acquia is much smaller than the number of people who have contributed at least one patch to Drupal 8 and do not work for Acquia.

By virtue of the fact that Dries created Drupal and co-founded Acquia, he gets the biggest megaphone. For example, I suspect that a lot more people will remember when Dries tweeted "Breaking news: out of the box, Drupal 8 is 2x to 200x as fast as Drupal 7 for anonymous users" than will remember the lengthy Twitter discussion that followed, suggesting flaws in the logic of his tweet. Dries, and his company, probably have the most power to shape messages about the essence of Drupal. But are they correct? Is Drupal actually "more about big sites"?

Drupal is for everyone

Larry Garfield has noted that Drupal 5, Drupal 6, Drupal 7, and Drupal 8 have all been accused of "leaving small sites behind." Larry also believes that it's "largely true, from a technical perspective," that Drupal 8 is more complex, but that for non-developers Drupal 8 is also "a huge leap forward." Larry is quite optimistic about the future of Drupal, for everyone.

So is the Drupal Association. Look no further than Drupal.org where the headline is "Drupal 8 Will Have Something for Everyone to Love." In spite of my opinion that Drupal 8 will be understood by many to be an "enterprise-friendly" CMS, it is not merely an enterprise CMS. I also agree with my fellow Drupal 8 configuration system co-maintainer, Alex Pott, when he writes, "Drupal is open-source software and I'm excited that enterprises, not-for-profits, schools, individuals and Captain Kirk can use Drupal 8." (I also find it laughable for me to be comparing my contributions to Drupal 8 to Alex's, but more on that later.)

The public criticism leveled at Drupal 8, and the responses to those criticisms, have tended to be only vaguely technical — the APIs keep changing, the configuration system does not work like the Features module, Drupal 8 is slow, small sites do not need web services, more object-oriented code favors professional developers, and so on. As Drupal becomes more capable of tackling increasingly complex projects, certain individuals feel that it will become less capable of handling simpler projects. From my perspective, that logic is flawed.

Let's briefly consider the hugely complex topic of the Configuration Management Initiative (CMI). There is no configuration management system in Drupal 7, although we do have Features and Strongarm as contributed (non-core) Drupal modules. As I have said many times, the configuration system in Drupal 8 is not the Features module. There is an extremely complicated configuration management system in Drupal 8, and one of the biggest influences on the configuration system was the new translation system, which seeks to make Drupal more accessible to more people who speak more languages. As a result of the configuration management initiative, we eliminated the need for roughly 50 database tables from Drupal 7 to Drupal 8 because we standardized on one system. I believe that one extremely well-thought-out system is more useful than dozens of competing solutions within one CMS. It feels specious to argue that these changes somehow favor the enterprise over the so-called "little guys."

Nedjo Rogers disagrees with me, especially with regard to the configuration management staging model.

His point is well taken that smaller sites — and, by extension, the non-enterprise developers who build those sites — do not generally require a development >> staging >> production workflow in the same way that a large enterprise would. Where I disagree is that the mere presence of a configuration system will negatively affect smaller sites.

Consider Backdrop, a Drupal fork. Like Nedjo, the Backdrop developers seemingly reject the notion that "Drupal is for everyone." On the pages of Drupal Watchdog, Jen Lampton and Nate Haug (another colleague of mine at Lullabot) wrote:

"As Drupal moves itself closer to the Enterprise market, Backdrop CMS emerges to meet the needs of the little guys."

Backdrop includes the configuration system, albeit without the translation system. Backdrop caters to Drupal 7 developers by trying to be more like Drupal 7 than Drupal 8. "We like to think of Backdrop CMS as the next logical step in Drupal's evolution," they write. The Backdrop community has been pushing this message as much as they can, in as many places as possible. They want you to believe that Backdrop is, as the Backdrop website announces, "the comprehensive CMS for small to medium sized businesses and nonprofits." (For more about Backdrop, listen to my interview with Nate Haug.)

As much as I would like to see Backdrop succeed, I have my doubts. I do not see a compelling technical reason for small businesses and nonprofits to use Backdrop rather than Drupal. I feel like I'm in a somewhat unique position where I can root for both Drupal and Backdrop, and I look forward to seeing how many people maintain contrib modules for Drupal and Backdrop at the same time, how many clients ask for Backdrop, how many people try Backdrop on Pantheon, what sort of community develops around Backdrop, etc. If Backdrop does succeed, I don't think it will have much to do with Drupal's code being more suited for corporate interests.

Over the past few years, I have given a variety of presentations covering Drupal 8, doing my best to step back from the minutiae and instead consider the broader picture. Each time I have reviewed the changes, I have felt that the majority of the new features would benefit everyone. Drupal 8 is more mobile friendly, includes a WYSIWYG editor, has Views in core, supports HTML5 markup, and is more accessible. While I have a lot of ideas about these particular issues, most of them have been debated extensively, and I would instead like to take a step back to consider Drupal's identity from a less technical perspective.

Drupal is personal

This simplistic "big site" vs "small site" construction overlooks some fairly significant factors, such as the fact that for many people Drupal is personal. In technology, ontological exploration tends to be driven by discussions of code rather than cultural considerations. We're very good at asking questions like "Will Drupal 8 be slower?" or debating "Is it more user-friendly?" We are less good at asking what influences our understanding of a piece of software. We believe that individuals "come for the code, stay for the community," but rarely do we interrogate the individuals in the community (although there are exceptions).

To ignore the influence of these outside forces paints an incomplete picture. I don't think, for example, that it's a coincidence that Nedjo Rogers, one of the people who keeps asking good questions about how Drupal 8 will work for small sites and distributions, is also the lead developer for Open Outreach, a Drupal distribution for "small nonprofits and grassroots movements." He openly admits how he is conflicted, personally:

"Abstract questions of saving the world or working for capital accumulation of course translate into real-lived experience. For me personally, the tensions between free software and working for 'the man' are ones I feel every day."

"I've been working on Drupal for many years and Acquia is my company. I know how Open Source works, I know the Drupal community inside out, I know how companies should work with the community, and I have no intention whatsoever to destroy my own child."

If Dries understands Drupal as his "child," imagine how that must have felt for him to have other people take his child and decide to raise it as their own — such as the case with Backdrop.

Many others have made similar comments regarding the personal nature of Drupal. xjm, who works in the Office of the CTO (OCTO) at Acquia for Dries, wrote, "Core contribution was a life-changing experience for me." There is a touching picture of webchick, who also works in OCTO, hugging a giant Druplicon, and I fondly remember how her bio on Twitter and elsewhere used to read "Powered by Drupal." chx, one of the most prolific Drupal contributors ever, wrote a post in 2009 entitled "Why I love Drupal" and then five years later had to explain why he changed his avatar to a crying Druplicon. I know most of these people personally. I've sat next to some of them for hours and days writing code. Some of them have stayed at my house. I feel confident that Drupal is not something they do just a little bit.

xjm contends that her team in OCTO has worthwhile personal motivations, and she defends them on her blog:

"People are individuals, not automatons operating within a corporate machine. Even the six of us in OCTO contribute outside of paid time, to things that are not part of our jobs but that we care about. Gábor leads the multilingual initiative because he wants to make Drupal support all languages; Acquia did not prioritize it. Tim voluntarily works on core problems that bug him based on experience building sites and contributed modules, in addition to critical issues Acquia pays him to help fix. And Acquia doesn't own what I think or what I do with my spare time. So while we should recognize the influences organizations may exert over contribution, we should give individuals the credit for their own work."

What is perhaps more fascinating to outsiders is that the Drupal community is brimming with people about whom I could say similar things, such as many of the compassionate, dedicated people that I worked with, year after year, planning the Twin Cities DrupalCamp. There is likely a similar group of people in your community.

In free software communities, as Eric Raymond reminds us, "attacking the author rather than the code is not done" (90). The individuals offering these competing narratives about Drupal do so in good faith. While it is tempting to construct Drupal as a battle of big vs small, I think that's far too limiting to understand what's happening in the Drupal community. Much like other wildly-simplistic binaries such as FSF vs OSI, Microsoft vs GNU/Linux, or even more nuanced "oppositions," such as Debian vs Ubuntu, it's important to keep in mind that Drupal is personal without attacking individuals.

Some of us work on Drupal during the day, contribute to Drupal in our free time, and spend a lot of time teaching others about the wonders of Drupal. This might not change our code, but it helps me understand these competing constructions.

Drupal is not personal

It could be a threat to the Drupal community if it continues to become less personal.

By default, all of the code in Drupal core is accessible to everyone. The problem is not about accessibility of code, but rather about concerns regarding influence. It's ridiculous to think that Drupal could not be used, studied, modified, or distributed by any particular group. The problem, as I see it, is that Drupal is moving in a direction where people continue to fear what Benjamin Mako Hill has recently described as "access without empowerment." It is quicker and more efficient to discuss a new feature or a design decision in a video call than on IRC. A room with just the OCTO developers can make decisions much more quickly than battling it out in an issue queue. But I have already tried to establish that Acquia is not some faceless Mega Corp, insensitive to the needs of the Drupal community.

Even the most well-intentioned individuals can disturb a free software community. Gabriella Coleman, in her dissertation-turned-book, Coding Freedom, recounts a story about when some of Debian's key members got together at a conference and concluded that Debian should end its universal architecture support. The resulting email proposal suggested by these well-intentioned individuals caused a "monumental" crisis in the Debian community. Coleman examined thousands of emails in response to the proposal, in addition to IRC communications and blog posts, and found that "one of the most significant complaints, stated over and over, was about its tone" (153). The community must believe that they can influence the decision-making process. She concluded:

"What this event revealed is that Debian's implementation of meritocracy, like all meritocracies, is a fragile framework easily overtaken by the threat of corruptibility" (154).

The threat can be real or imagined, intentional or not. In this particular case, hundreds of Debian developers were left imagining "smoky backrooms" where the decisions were made without their input.

Dries wrote, "My motive is to do well and to do good. I admire organizations like Doctors Without Borders and strive to emulate them." I believe that Dries has done an excellent job of balancing Acquia's needs with the needs of the community. I believe he is trying "to do good."

I still have concerns about Acquia, the company. When Acquia has its IPO, it will no longer be able to act like "Doctors Without Borders" — it must answer to its investors. Then again, maybe the IPO will not change Acquia all that much given the large amount of funding that Acquia has already raised. Acquia, the company, is forcing the Drupal community to raise questions about the consequences of capital investment. It forces us to wonder if Drupal can continue to benefit both the global rich and global poor. Can a startup that aspires to be a billion dollar company avoid exerting undue influence on software that is used by both large, profit-driven enterprises and small, activist organizations?

What makes more sense to me is that some of the uneasiness with Drupal 8 stems from worries that by doing so much to make Drupal better, Acquia might be, in essence, limiting the flexibility of Drupal core, and that it is placing pressures on Drupal core that may have more to do with Acquia's own financial or organizational limitations than perceived needs of the broader Drupal community. We fear that Acquia could be making Drupal impersonal.

I first became really interested in free software after reading Glyn Moody's Rebel Code and Eric Raymond's The Cathedral and the Bazaar. Words like "rebel" and "bazaar" intrigued me. When I first started using Drupal professionally, at Wisconsin Public Radio, I was largely drawn to the software because of its much-discussed connection with nonprofit, academic, and governmental institutions. Projects like CiviCRM and Drupal distributions such as Open Outreach (maintained by Nedjo) and Open Media were all geared toward NGOs, nonprofit, activist, and grassroots organizations — groups that, in many circumstances, situate themselves in opposition to corporate interests. I get the feeling that many people who were originally drawn to Drupal because of its perceived status as nonprofit-friendly are now conflicted by the strong influence of business at the very core of Drupal.

If Drupal continues to be understood as more about business, it is potentially less about individuals, and more about the needs of the businesses for whom those individuals work. I cannot imagine, for instance, that Dries or Angie would commit code to Drupal core that they genuinely believed to be against the interest of their corporate partners.

Drupal is a community of volunteers

With the recent upswing in paid, full-time Drupal core developers, I'm concerned that Drupal risks failing to attract the volunteers necessary to continue its growth. Nowadays there are more people getting paid to work on Drupal full time than there were just a few years ago. If one believes that unpaid labor is unethical, this change is welcome. It might benefit certain individuals to have a day job working on Drupal rather than getting burned out working nights and weekends on Drupal. However, as the cultural construction of Drupal moves from something associated with weekend hacking and grassroots movements to something associated with corporations — or worse, a single corporation — we potentially create other problems.

Certainly questions about Acquia's influence are not new. Objections to Acquia's position were raised, loudly, at least as early as 2007. It is a credit to Dries that as Acquia has continued to grow, the Drupal community has grown right alongside it, allowing more people to create meaning in code.

It's a well-known fact that programmers will contribute to projects for no reward other than making the software more useful. When Acquia decides that it wants to change something about Drupal, and has the votes, talent, and funding to make those changes a reality, it would seem to limit the types of contributions that others can make. Yet that does not seem to be the case.

Holly Ross, the Executive Director of the Drupal Association (DA), was recently discussing the D8 Accelerate project, which aims to put $250,000 toward accelerating the release of Drupal 8. She described this initiative as replacing an "underground economy" of companies paying individuals to work on Drupal core. She and the DA are definitely aware that putting money toward Drupal core development could undermine volunteerism. They hope to fund ideas, not individuals, that push Drupal 8 closer to release.

The DA wants to help keep developers focused on finishing the issues that will get Drupal 8 out the door. Dries, who is the president of the board of the DA, told the New York Times:

"Open source is Darwinian. Eventually the best idea wins, but it is much more wasteful. A regular company couldn't have experimented with creating 10 versions of an online photo album, then picked the best one."

Angie Byron, who is also on the board of the DA, wants to keep Drupal 8 development moving forward, and she's careful not to tell people what specifically to do. On Twitter she asked, "If you're a developer, and you aren't fixing #Drupal 8 critical bugs, care to share why? Curious what we can do to help momentum." She wants to keep things moving in a certain direction, but she knows she cannot simply tell people what to do.

There are pitfalls with picking solutions. For example, Linus Torvalds, the creator of the Linux kernel, famously described it as something that he does "just for fun." The kernel is understood to be beyond the control of any one company because many companies pay developers to work on it. There are lots of developers who are motivated to work on Drupal simply because they want to make it better. But working on Drupal core is no longer just for fun if it is also for money.

Benjamin Mako Hill, a prolific developer and highly-regarded free software advocate, wrote about the influence of paid developers in volunteer-oriented free software communities in an article entitled, "Problems and Strategies in Financing Voluntary Free Software Projects." He recounts stories of numerous communities that were pulled apart by the mere presence of paid developers. Hill believes, "when it comes to voluntary work and paid labor, you can have one or the other but not both." He acknowledges the importance of volunteerism:

"Perhaps the most important benefit of volunteerism in free software development is institutional independence. Institutional independence in a free software project means that no company or organization has a monopoly on the ability to define specifications or to direct the project. To users and developers, institutional independence means that they get to define the specifications; it is a broad perceived autonomy. Projects that are driven or directed by volunteers are more easily able to appear institutionally independent than corporate or organizationally directed projects or any projects that incorporate paid labor."

Projects like Debian — which benefit from companies that pay employees to make Debian better — are viewed as institutionally independent because no one organization can direct the project.

Perspectives like these induce some level of worry for me about the future of Drupal. Perhaps more frightening is that I am personally affected. Since the recent growth of paid laborers in the Drupal community, I, personally, have felt less motivated to work on Drupal core because I know that others who are being paid to do the work will get it done, eventually, if I do not.

Don't misunderstand me: I love working on Drupal during the day and getting paid. I have my dream job and I love it. Maybe it's burnout. Maybe it's my upbringing. Maybe it's fear of smoky backrooms. It could be that I never really did that much anyway — that's certainly how I feel when I'm hanging around with people like Tim Plunkett or Alex Pott, both of whom worked tirelessly on Drupal core well before they were ever paid to work full time on core.

Drupal is the GPL

There are many reasons for hope. One could point to Dries's ability to lead the Drupal community, the respect he is afforded, and the respect he extends. One could look to the ever-growing number of Drupal core contributors. More than any other factor, though, I have faith in the GNU General Public License, and its decades-long track record of making the world a better place.

If we believe that the fundamental reason for free software licenses is to deny anyone the right to exclusively exploit a work, then what programs like LSD are doing is still in the spirit of free software licenses because they eventually share their code, even if their methods ruffle some feathers.

Drupal, however it is constructed, has a powerful bodyguard keeping it safe. The meticulousness of the GPL ensures that Drupal will always be free of restrictions. It prevents any company from commercializing and profiting from proprietary derivative versions of Drupal. While the community can be threatened by commercial interests, the code will be free.

If factions of the community feel sufficiently disenfranchised, they can take the code and do as they please. Backdrop, the Drupal fork, was possible because the GPL explicitly allows forking. The GPL protects your right to tinker — in small ways and in really big ways.

Most importantly, the spirit of the GPL remains strong in the Drupal community, and in spite of my recent hesitation, I sense that many people want to play their part. For example, I have the pleasure of working every day alongside another one of Drupal's most prolific contributors, Dave Reid. Recently, Dave and I were working on finding ways to process data more quickly. In the course of our work for a large enterprise, Dave created a method to process records concurrently rather than sequentially. Once we sorted out the bugs and got things working, Dave contributed the module back to the community. The Concurrent Queue module is fantastic, and anyone can use it. It's no less useful to someone who needs it because we did it while getting paid by a corporation. This kind of thing happens every day, across the community.

We did not come up with these ideas on our own. We learned them from the Drupal community.

The Drupal community is not a special snowflake. The Drupal community is a group of individuals that build GPL software together. Drupal 8 will be free as in freedom. It's software that can be deeply personal. It's software that can benefit organizations. It's software that can benefit our enemies. The GPL ensures that everyone is free to use Drupal.

So as I see it, we have a multitude of competing cultural constructions of Drupal, none of which can claim to be correct. It's software for websites large and small. It's built by paid developers and volunteers. Many of us have a personal connection with Drupal. Because Dries brought us down this road — this joyride that is Drupal — with software licensed under the GPL, everyone is free to study, use, modify, and distribute any of the files on Drupal.org. And that has made all the difference.

]]>https://www.lullabot.com/articles/the-cultural-construction-of-drupal1698 at https://www.lullabot.comThu, 07 May 2015 18:00:00 GMTLullabot's annual party has become a DrupalCon tradition – fun friendly people hanging out and having a good time. If you're new to DrupalCon, it's a great place to meet people. If you're an old-timer like most of us, it's a great place to see old friends and make new ones.

Lullabot is sending 46 people to DrupalCon this year. Eleven of them are presenting sessions, so don't miss those. Also, both Lullabot and Drupalize.Me will be represented with booths (407 & 411) in the exhibit hall. We'll have our famous floppy disk party invites at the booth, so stop by early on Tuesday if you want to fill out your collection.

The venue for the party is just one block from the Los Angeles Convention Center. So stop by on Wednesday evening, have a beer, and say "hello!"

]]>https://www.lullabot.com/articles/lullabots-7th-annual-drupalcon-party1699 at https://www.lullabot.comTue, 05 May 2015 18:33:55 GMTDrupalCon: Carwin Young, co-author of Front-End Fundamentals, will be giving a session at DrupalCon LA 2015 on The Why and How of Front End Architecture. Come by and say hi!

I started building websites, like many of us, as a back-end developer. I spent many delightful years developing with PHP and Drupal. However, the projects that I started working on in 2011 allowed me to begin playing with various front-end technologies. In early 2012, I successfully launched BracketCloud which was built with Backbone.js, Drupal and Node.js. Moving forward and eventually into 2013, my exposure to the front-end continued to grow. I worked on several projects that involved a lot of front-end development such as our very own Drupalize.Me and more recently the MSNBC site.

When I look back to the beginning of my journey to becoming a front-end developer, I wish that someone had been there to hand me a list of all the popular tools that I should probably experiment with or at least be aware of. Instead, I stumbled blindly through an overgrown forest of JavaScript plugins and frameworks, trying to figure out for myself how a front-end developer was supposed to code. That being said, I love teaching myself new things and I thoroughly enjoyed the experience.

One of my 2013 new year resolutions was to write a book on front-end development that would help people with the learning curve. There was a lot to consider as I began to draft out the book structure and I went through many iterations of the topics I wanted to cover. At our annual Design & Developer retreat here at Lullabot I talked with Carwin Young, our Senior Front-End Developer, about my progress and as he shared his thoughts it became obvious that his experience and knowledge would be invaluable to the book. We decided to co-author the book together and expand its scope. One year after conception, we are extremely pleased to announce the release of Front-End Fundamentals!

Front-End Fundamentals introduces the tools and fundamentals of front-end development practices and workflows. In the book we cover topics such as JavaScript frameworks, CSS styling, dependency management and task automation.

]]>https://www.lullabot.com/articles/frontend-web-development-fundamentals1674 at https://www.lullabot.comTue, 05 May 2015 18:00:47 GMTClutch is a research firm that analyzes and reviews software and professional services agencies, covering more than 500 companies in over 50 different markets. Like a Consumer Reports for the agency sector, they do independent research. They publish their results at Clutch.co. Recently, they reviewed Lullabot, interviewing our clients; they created a profile of Lullabot with the results. Lullabot received top marks across the board.

[image:{"fid":"2398","width":"full","border":false}]

In January, Clutch published a press release listing Lullabot first overall on its international list of web development agencies. We've always been very proud of our work, but it's really amazing to be recognized like this by an independent research firm. In March, Clutch sent out another press release that lists Lullabot as top in Boston-area web design and development agencies. We'll take it!

Since 2006, we've built an incredible team at Lullabot and I'd like to thank all of our employees for their contributions. We've also partnered with scores of magnificent clients over the years. We'd like to thank them all for their trust and collaboration. Of course, Clutch's listings are dynamic and ongoing. We can't sit back and expect to remain in the top position. We will continue to strive to be the best agency we can be, providing superlative results for our clients while continuing to provide a rewarding work environment for our talented team of expert developers, designers, and strategists.

]]>https://www.lullabot.com/articles/lullabot-named-top-development-and-design-agency1696 at https://www.lullabot.comFri, 01 May 2015 15:01:38 GMTFellow Lullabot Andrew Berry has written an article on why to decouple. If you do go this route, it’s because you’ve thought a lot about how to separate concerns. Content consumers are separated from content producers. Front-end developers are freed from the dictates of a back-end CMS. This article isn't about the separation of concerns, but rather what lies at the middle of all of these concerns: your HTTP API. In fact, you'll find that in a decoupled project the HTTP API provides not only the middleware, but also the middle ground. Let me explain.

The communication interface

One of the important things to understand is that an HTTP API is not just a way to expose your data or content from a CMS; nor is it a way to work around the complexity of making views for the front end. An HTTP API is an interface: the clearly defined contract between two sides of a data transaction that takes into account the needs and limitations of both providers and consumers. Done right, information can flow freely from the content management system to the end-user through many channels. Done wrong, circular dependencies or poorly defined object schemas impinge upon the free flow of information.

To enable this “flow”, we need:

A common language—a vernacular—that allows every actor in the system a voice. This language should not be restricted to developers. Product teams, project managers, designers, stakeholders, and so on will influence decisions about the API. These decisions may come through the content model, which is built on the needs of the business. The people making these decisions might not be technical. You will want to re-use the nomenclature from the content model in your API, so a front-end developer, familiar with the API, can have a clear conversation with a non-technical business stakeholder.

The delegation of responsibilities. When collaborating across disciplines it’s easy to generate friction. Decisions about the API will affect who’s responsible for what, and all sides will have a claim. For instance, pagination requires a tricky balance between front-end and back-end performance. Make sure to think about these areas carefully so everyone knows what they’re responsible for.

Clear goals for the API. These goals can be written in such a way that they form the basis for test-driven development.

Expectations to be set. Good, human-readable API documentation means that your front-end team can begin work while the back-end implementation of the API is still in progress. This approach provides iterative feedback to the API developers. The API consumers get to try out the API as it's being developed and quickly discover any gaps or design bugs. Everyone wins!

Defining the middle ground

Collaborating on requirements with other human beings is usually the most difficult component of a project. Machine-oriented description languages (JSON, YAML, and so on) are not languages we can use to speak to each other. We’ve just talked about the importance of a vernacular, but how do we put that to practical use? In his article “API Best Practices: Spec-Driven Development”, Mike Stowe writes: “One of the quickest ways to kill your API is to define the API in your code, instead of coding to its definition.” Document and define your API first. There are tools to translate human-readable API documentation to machine-friendly counterparts. Consider Apiary's Blueprint format, or, for a more technical approach (that also introduces testing), explore Postman, Swagger, or RAML. Too complicated for your stakeholders? Consider the Esperanto of computer science: spreadsheets.

The following shows an example of a Season resource being described using the Blueprint format.
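
A minimal sketch of what such a description might look like (the resource fields are invented):

FORMAT: 1A

# Season API

## Season [/seasons/{id}]

+ Parameters
    + id: 1 (number, required) - Unique identifier of the season.

### View a Season [GET]

+ Response 200 (application/json)

        {
            "id": 1,
            "title": "Season One",
            "episodeCount": 13
        }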

That documentation is then rendered by Apiary in a more human-readable way.

Whatever approach you decide on, make that documentation—that definition—the system of record. That will prevent the troublesome spread of conflicting information via the ticketing system or your email inbox.

Having shared documentation among a big team will introduce the usual problems. Consider version control for versioning and diffing, communication channels for discussion and change notifications, and a permissions system that is flexible across each stage of the API design. Choose someone to set this all up. You will want someone to draft a first approach for everyone to build from. This person may then become the gatekeeper if you decide to have one. GitHub, BitBucket and other web-based user interfaces for Git are good solutions.

The inherent virtues

Building from an API places decisions in the middle ground, avoiding favoring one specific team while also compartmentalizing the decision scope. This minimizes the probability of the ripple effects of a decision disproportionately affecting the API implementation or its consumers.

An API, like a contract, shields both parties.

There are many reasons to build your next project API-centric. When it comes to capturing needs and communicating them to all the stakeholders involved, you want everyone to have a common language for expressing requirements. That common language lets us use our documentation tools and communication interface to implement each requirement in the API. When making technical decisions, your entire team will be able to communicate effectively, using the API as a natural interaction point, reducing conflicts and helping to get your project out the door.

https://www.lullabot.com/articles/beyond-decoupling-the-inherent-virtues-of-an-api

Thu, 30 Apr 2015 17:00:00 GMT

One of the major topics of discussion in the Drupal community has been decoupled (or headless) Drupal. Depending on who you ask, it’s either the best way to build breakthrough user experiences, or nothing short of a pandemic. But what exactly is a decoupled architecture?

A decoupled content store splits a website’s content from how it is displayed, spreading the work across multiple independent systems. Decoupled sites are the logical evolution of splitting content from templates in current CMSs. Decoupled architectures started to become mainstream with the publication of NPR’s Create Once, Publish Everywhere (COPE) series of articles. Other media organizations, including Netflix, have seen great benefits from a decoupled approach to content.

Like many other solutions in computer science, decoupling is simply adding a layer of technical abstraction between what content producers create and what content consumers see.

As the release of Drupal 8 approaches, technical decision makers face an important choice: when an existing site is upgraded to Drupal 8, how do we decide whether to decouple it? Before we decide to work on a decoupled implementation, it’s critical that everyone, from developers and project managers to content editors and business leaders, understands what decoupling is and how to ensure a decoupled effort is worth the technical risk.

Why Decouple?

I’ve seen many people jump to the conclusion that decoupling will solve problems unrelated to a decoupled architecture. Decoupling doesn’t mean a website will have a cleaner content model or a responsive design. Those are separate (though relevant) solutions for separate problem sets.

These are the specific advantages of a decoupled architecture for a large organization:

Clean APIs for mobile apps: Since the website front-end is consuming the same APIs as mobile apps, app developers know that they aren’t a second-tier audience.

Independent upgrades: When the content API is decoupled from the front-end, the visual design of a website can be completely rebuilt without back-end changes. Likewise, the back-end systems can be rebuilt without requiring any front-end changes. This is a significant advantage in reducing the risk of replatforming projects, but requires strict attention to be paid to the design of the content APIs.

APIs can grow to multiple, independent consumers: New mobile apps can be created without requiring deep access to the back-end content stores. APIs can be documented and made available to third parties or the public at large with little effort.

Less reliance on Drupal specialists: Drupal is unique in that front-end developers need a relatively deep understanding of the back-end architecture to be effective. By defining a clear line between back-end and front-end programming, we broaden our pool of potential developers.

Abstraction and constraints reduce individual responsibilities while promoting content reuse: Content producers are freed from needing to worry about exact presentation on every single front-end that consumes content. Style and layout tweaks are solely the responsibility of each front-end. Meanwhile, front-end developers can trust the semantics of content fields and the relationships between content as determined by the content experts themselves.

Here Be Dragons

At the beginning of a decoupled project, the implementation will seem simple and straightforward. But don’t be fooled! Decoupled architectures buy flexibility at the cost of simplicity. They aren’t without risk.

One system becomes a web of systems: A decoupled architecture is more complex to understand and debug. Figuring out why something is broken means not just fixing the bug, but first sorting out whether the problem lies in the consumer's request or in the API itself.

Strict separation of concerns is required to gain tangible benefits: As front-end applications grow and change, care has to be taken to ensure that front-end display logic isn’t encoded in the API. Otherwise, decoupled systems can slowly create circular dependencies. This leads to systems with all of the overhead of a decoupled architecture and none of the benefits.

Drupal out-of-the-box functionality only works for the back-end: Many contributed modules provide pre-built functionality we rely on for Drupal site builds. For example, the Facebook module provides access to the Facebook API and to Facebook login widgets. In a decoupled architecture, this functionality must be rewritten. Site preview (or even authenticated viewing of content) has to be built from scratch in every front-end, instead of using the features we get for free with Drupal. Need UI localization? Get ready for some custom code. Drupal has solved a lot of problems over the course of its evolution so you don’t have to—unless you decouple.

The minimum team size is higher for efficient development: A Drupal site with a small development team is not a good candidate for decoupling unless content is feeding a large number of other applications. In general, decoupling allows larger teams to work concurrently and more efficiently, but doesn't reduce the total implementation effort.

Abstraction and constraints affect the whole business: The wider web publishing industry still has the legacy of the "webmaster". Editors are used to being able to tweak content with snippets of CSS or JavaScript. Product stakeholders often view products as a unified front-end and back-end, so getting the funding to invest in building excellent content APIs is an uphill battle. Post-launch support of decoupled products can lead to short-term fixes that are tightly coupled, negating the original investment.

The Heuristic

To help identify when decoupling is a good fit for a client, Lullabot uses the following guidelines.

Decoupled architectures may be appropriate when:

The front-end teams require full freedom to structure and display the data.

The front-end team does not have Drupal expertise.

More than one content consumer (such as a website and multiple mobile apps) is live at the same time.

If a project meets some of these criteria, then we’ll begin a deep-dive into what decoupling would require.

Does decoupling also require a complete content rewrite, such as when migrating from a legacy "full-page" CMS? We’ve encountered sites that haven’t made the move to structured data yet and still consist primarily of HTML “blobs.” This scenario presents a significant hurdle, though it’s a problem separate from decoupling itself.

Does the development team have the time needed to build and document a content API with something like Apiary? Or is using Drupal as a site building (but coupled) development framework a better fit?

Does the web team consist primarily of Drupal developers, and will those developers continue to support the website in the future? Would the front-end team be better served by Views, Panels and the theme layer, or by a pure front-end solution like React or Angular?

Is there enough value in decoupling that the business is willing to change how they work to see its benefits?

Decoupled architectures are a great solution, but they’re not the only solution. Some of the best websites are built with a completely coupled Drupal implementation. It’s up to us as technical leaders and consultants to ensure we don’t let our excitement over an updated architecture get between us and what a client truly needs.

https://www.lullabot.com/articles/should-you-decouple

Wed, 29 Apr 2015 18:00:00 GMT

Over the past few months I have been banging my head against a problem at MSNBC: importing the site's extremely large database to my local environment took more than two hours. With a fast internet connection, the database could be downloaded in a matter of minutes, but importing it for testing still took far too long. Ugh!

In this article I'll walk through the troubleshooting process I used to improve things, and the approaches I tried — eventually optimizing the several-hour import to a mere 10-15 minutes.

The starting point

This was the setup when I started working on the problem:

The development server had a 5GB database with 650 tables and a total of 27,078,694 rows.

Wall-clock time for an import was two hours, and it was causing a lot of frustration within the team. Such a slow process was a big limitation when someone wanted to peer review pull requests. To avoid the delay, people were updating their local databases only once a week instead of every couple of days, making regressions much more likely when exporting configuration from the database into code.

Analyzing the database

My first attempt at a solution was reducing the content in the development database. If the size of the database could be reduced to about 1GB, then the import process would be fast enough. I therefore inspected which tables were the biggest (a query like the one sketched after this list can surface them) and started analyzing what I could trim:

field_body_revision was the biggest table in the database. There were many revisions, and many of them had a considerable amount of HTML. I thought about simply trimming this table and field_body, but doing it without breaking HTML would be tricky.

field_revision_field_theplatform_transcript was the second biggest table. I looked at the source code to see how it was being used, asked the development team whether it was needed for development, and found out that I could trim this value without damaging the development experience. An easy win!

Fields used on the homepage had tons of revisions. One reason was heavy use of the Field Collection module on a multi-value Entity Reference field. Each node save created a cascade of new revisions that were good candidates for removal.
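
For reference, the relative size of each table can be pulled straight from MySQL's information_schema. This is a minimal sketch rather than the exact query used on the project:

```sql
-- List the ten largest tables in the current database by data + index size.
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY (data_length + index_length) DESC
LIMIT 10;
```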

Trimming long tables

All of this was promising, and I set up a standard process for slimming down our development databases. Every night, the production database is copied into the development environment. MSNBC is hosted at Acquia, which offers a set of hooks that fire on environment operations such as copying a database or files, or deploying a release. I wanted to trim down the size of the field_revision_field_theplatform_transcript table, so I added the statements below to the post-db-copy hook for the development environment.
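
The exact statements were not preserved in this copy of the article. As a hypothetical reconstruction, the hook would run SQL along these lines (the table and column names follow Drupal 7's field storage conventions; the real queries may have differed):

```sql
-- Blank out the long transcript values in both the current-data and
-- revision tables for the field (illustrative reconstruction).
UPDATE field_data_field_theplatform_transcript
SET field_theplatform_transcript_value = '';

UPDATE field_revision_field_theplatform_transcript
SET field_theplatform_transcript_value = '';
```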

The above queries trim a field in a couple of tables that contains very long strings of plain text. Those steps consistently reduce the database size by 1GB.

That change reduced the total import time to an hour and forty-five minutes. It was a step forward, but we still had more work to do. The next thing to try was slimming down revision data.

Cutting down revisions

Some nodes in MSNBC’s database had hundreds of revisions. Developers and editors don’t need all of them, but content is gold, so we couldn’t just wipe them out. However, if we could cut down the number of revisions in development, the database size would go down considerably.

I looked for modules on Drupal.org that could help me accomplish this task and found Node Revision Delete. It certainly looked promising, but I realized that I had to put a bit of work into it so it could delete a large number of revisions in one go. I added a Drush command to Node Revision Delete that used the Batch API so it could run over a long period of time deleting old revisions. When I tested the command locally to keep just the last 10 revisions of articles, it ran for hours. The problem is that node_revision_delete() triggers several expensive hooks, which slows the process down quite a bit.
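
The patch itself isn't reproduced here, but the heart of such a batch operation looks roughly like this hypothetical sketch (the function name and the $keep parameter are illustrative):

```php
<?php
/**
 * Batch operation: delete all but the newest $keep revisions of a node.
 */
function example_trim_node_revisions($nid, $keep = 10) {
  $node = node_load($nid);
  // node_revision_list() returns revisions ordered newest first, keyed by vid.
  $vids = array_keys(node_revision_list($node));
  // Skip the $keep most recent revisions and delete the rest.
  foreach (array_slice($vids, $keep) as $vid) {
    // The current revision can never be deleted.
    if ($vid != $node->vid) {
      // This is the expensive call: it fires field and entity hooks.
      node_revision_delete($vid);
    }
  }
}
```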

This made me look at the production database. Did we need that many revisions? I asked the editorial team at MSNBC and got confirmation that we could stop revisioning some content types. This was great news, as it would slow the database's future growth as well. I went one step further and configured Node Revision Delete to delete old revisions of content on every cron run. Unfortunately, our testing missed a bug in the Field Collection module: deleting a revision would delete an entire field collection item. This was one of the most stressful bugs I have ever dealt with: it showed up on production and was deleting fresh content every minute. Lesson learned: be careful with any logic that deletes content!

Because of the concerns about lost content, and the fact that Node Revision Delete was still slow to do its work, we uninstalled the module and restored the deleted data. Reducing the number of revisioned content types would slow the database's growth, but we wouldn't try to remove historical revisions for now.

Deleting entire nodes from the development database

Our next idea was deleting old nodes in a development environment and sharing that slimmed-down database with other developers. After testing, I found that I could delete articles and videos published before 2013 while still leaving plenty of content for testing. To test it, I wrote a Drush script that picked a list of nids and used the Batch API to delete them. Unfortunately, this was still too slow to help much: each call to node_delete() took around 3 seconds. With hundreds of thousands of nodes, this was not a valid option either.

At this point, I was out of ideas. Throughout this effort, I had been sharing my progress with other developers at Lullabot through Yammer. Folks like Andrew Berry and Mateu Aguiló suggested I take a look at MySQL Parallel, a set of scripts that break up a database into a set of SQL files (one per table) and import them in parallel using the GNU Parallel project. Several Lullabot developers were using it for the NBC.com project, which also had a database in the 5-6GB range, and it looked promising.

Importing tables via parallel processing

Mateu showed me how they were using this tool at NBC TV, and it gave me an idea: I could write a wrapper for these scripts in Drush, allowing the other members of the development team to use them without as much setup. Coincidentally, that week Dave Reid shared one of his latest creations at one of Lullabot's internal Show & Tell talks: Concurrent Queue. While examining that project's code, I discovered that deep down in its guts, Drush has a mechanism to run processes concurrently. Eureka! I now had a feasible plan: a new Drush command that would either use GNU Parallel to import MySQL tables or fall back to Drush’s drush_invoke_concurrent() to speed up the import process.

The result of that work was SyncDb, a Drupal project containing two commands:

drush dumpdb extracts a database into separate SQL files, one per table. Structure-only tables are exported into a single file called structure.sql.

drush syncdb downloads the SQL files and imports them in parallel. It detects whether GNU Parallel is available and, if so, uses it to import the tables with as much CPU as possible. If GNU Parallel is not available, it falls back to drush_invoke_concurrent(), which spins up a sub-process per table import.

Here is a sample output when running drush syncdb:

```
juampy@juampy-box: $ time drush syncdb @msnbc.dev
You will destroy data in msnbc and replace with data from someserver/msnbcdev.
Do you really want to continue? (y/n): y
Command dispatch completed. [ok]
real    13m10.877s
```

13 minutes! I could not believe it. I asked a few folks at MSNBC to test it, and their times ranged from 12 to 20 minutes. This was a big relief: although trimming content helped, making better use of CPU resources was the big breakthrough. Here is a screenshot of my laptop while running the job. Note that all CPUs are working, and there are 8 concurrent threads working on the import:

[image:{"fid":"2368","width":"full","border":false}]

Next steps and conclusion

There is still a chance to optimize the process even more. In the future, I'll be looking into several potential improvements:

GNU Parallel has many options to make even better use of your CPU resources. Andrew Berry told me that we could also try the xargs command, which supports parallel processing as well and is available by default on all *nix systems; a rough sketch follows this list.

Get back to the root of the problem and see if we can reduce production’s or development’s database size.

Upgrade to Drush 7 on the server. Drush 7 removed the --skip-add-locks option when dumping a database, so dumps now include table locks, which speeds up imports considerably.
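
As a rough illustration of the xargs idea (untested here; the dump directory, database name, and concurrency level are placeholders):

```bash
# Import every per-table dump file, running up to 8 imports at a time.
ls /tmp/dumps/*.sql | xargs -P 8 -I {} sh -c 'mysql mydb < "{}"'
```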

Does importing the database take a long time on your project? Then have a look at the approaches I covered in this article. Optimizing these time-consuming development steps can vary widely from project to project, so I am sure there are many more ways to solve the problem. I look forward to your feedback!

https://www.lullabot.com/articles/importing-huge-databases-faster

Thu, 23 Apr 2015 16:00:00 GMT

When we started the MSNBC project, my colleague Jerad Bitner established a process whereby each ticket would be implemented in a Git branch and a pull request would then be created for someone on the team to review. I had done a bit of peer reviewing in the past, but this experience was totally different.

In GitHub, when you are added to a repository, you get notifications for all issues and pull requests that are created unless you change a setting. I don’t know why, but every morning while catching up on email, I started looking at all of them. I realized that, apart from learning from everyone’s code and discussions, I was gaining a feeling of safety by building a mental picture of the site. This gave me a tremendous amount of confidence to chime in on pull requests, suggesting small improvements that helped with the quality and consistency of the entire codebase.

Even in times when there was a lot to do, I kept looking at pull requests. I had to: I felt that if I stopped, I would end up duplicating code, or my code wouldn't follow the approach taken in other areas of the codebase. In this article we will look at tips and examples for doing peer reviews within a team.

Improving code together

Peer reviewing has become a habit for me, and GitHub’s pull requests make the process easy and fun. They make the code visible for everyone to discuss, and they are a great chance to sneak small improvements into the surrounding code, as in the following example:

[image:{"fid":"2360","width":"full","border":false}]

The code above is correct and meets the requirements of the related ticket, but here we are suggesting a function from our codebase that achieves the same result in a simpler way. Yay for standardization!

Keeping the pace while reviewing

Discussions in a peer review must be as efficient as possible or they will slow down progress. As soon as we see that a discussion is taking too long, we create a follow-up ticket and merge what was implemented in that pull request. Here is an example:

[image:{"fid":"2358","width":"full","border":false}]

By creating follow-ups, we create an opportunity to rethink what was discussed, then open a new pull request to test it and potentially merge the improvement. It keeps both project managers and developers happy, as the project keeps evolving at the right pace.

Tone to use in peer reviews

Tone is very important in peer reviews. Being positive, constructive, and vulnerable are key attributes for connecting with someone’s work. Here is an example of typical suggestions and their feedback:

[image:{"fid":"2361","width":"full","border":false}]

Note the use of words like please and could. In the end, we are just making suggestions for how the code could improve. Even when we find bugs or missing functionality, keeping a constructive tone in our comments encourages team members to improve instead of provoking a defensive response. Some open source projects, like Drupal, codify how their members should communicate in a Code of Conduct.

There is always time for a laugh

There are times when making a joke about how we feel may be just the thing to break the pressure of a deadline and inject good energy. Note that we should take care about when to do it, and whether the author of the pull request has a sense of humor that connects with ours. Here is an example where it worked great:

[image:{"fid":"2359","width":"full","border":false}]

Responsibility is shared among the team

By contributing to someone’s pull request with a peer review, the team gains a feeling of shared responsibility for the code. It is much harder for a bug to sneak in when more eyes look at the code before it is merged into the main branch. Moreover, if there is an issue with the code, there will be two or three people who can look into it, and they will back each other up to fix it. Here is an example of a bug caught by peer review that never made it to production:

[image:{"fid":"2357","width":"full","border":false}]

Not everyone likes to peer review, though, and that's fine. You can’t expect the whole team to review everyone else’s pull requests, but having at least a couple of people doing it makes a big difference. It does not need to be the senior back-end and front-end developers: less experienced folks will ask different questions, which are valuable feedback too.

Getting other roles involved

In order to get project managers, external teams, and the client into peer reviewing, we integrate the GitHub repository with Jenkins to spin up a testing environment for each pull request. We can see this in action in the following screenshot, where we not only get a link to the testing environment but also see the results of end-to-end tests: [image:{"fid":"2362","width":"full","border":false}]

This gives people who do not have a development environment the opportunity to test something before it gets merged into the main branch, and it saves us a lot of time in obtaining feedback.

Making code easier to read

Peer reviewing starts with reading code that someone else has written. If the code is easy to read, then we can go on to the next step (testing it). If the code can’t be read easily, then we ask questions. A few days ago I read this quote from the book The Pragmatic Programmer:

Remember that you (and others after you) will be reading the code many hundreds of times, but only writing it a few times.

This quote really struck a chord with me. Our code will be read by many others (and by our future selves), and therefore it has to be as clear as possible. What counts as clear has to be agreed upon within the team. Personally, I follow these steps before I ask for a peer review:

I check that the code is clear and meets our coding standards.

I check that I have commented my code enough and that comments are well written.

I check that the commit messages match with the changes in code.

Finally, I create a pull request and add testing instructions at the top for someone else to peer review it.

The main point of the above list is that I want my code to be tested and merged quickly. The easier I make it for the peer reviewer, the better the chance I have of getting it into the main branch and resolving the related ticket.

It’s a learning experience for everyone

Being vulnerable while peer reviewing has the benefit of learning. I have learned a lot of things while doing peer reviews just by asking about a particular line. For example, I had no idea what $(function () { ... }) was for in JavaScript. This is how I learned about it:

[image:{"fid":"2363","width":"full","border":false}]

And there is more: when I was convinced that the function empty() was bulletproof, I stumbled upon this:

[image:{"fid":"2364","width":"full","border":false}]

The peer review checklist

Here are the guidelines that we used on the MSNBC team. If you want to try peer reviews with your team, you should get together and agree on a list of steps like the following:

Does the code follow our Coding Standards?

Is the code well documented?

Are there testing instructions on the ticket?

Does the code rely on our existing APIs?

Are all comments in the pull request resolved or discussed?

Does this pull request need any follow up tickets? Are they created?

If there is a testing environment, do all tests pass?

Are the requirements of the ticket met in your local environment or in a testing environment?

Are there database updates? Do they complete without any errors/warnings?

Are there any changes in Features components? Do they revert as expected?

Look back at the requirements — are they met in an efficient manner? Would you have architected the changes differently?

Conclusion

My aim with this article was to share how beneficial doing peer reviews has been for me. Do you think this would apply to your project? Try it out with your team for a couple of weeks and share your thoughts here with us. I am sure that you will get something good out of it.

I’d like to take this chance to thank my colleague Jerad Bitner for getting the process into the team, and James Sansbury for teaching me a lot about tone and thoroughness. Also, Matt Oliveira and Sean Lange provided great feedback while reviewing this article. Thanks as well to all the ’bots who participated in the BoF about peer reviewing at the Lullabot retreat.

https://www.lullabot.com/articles/the-peer-review-howto-guide

Wed, 15 Apr 2015 18:00:00 GMT

If you’ve been following web development over the past few years, you will no doubt have noticed that JavaScript frameworks are an increasingly popular way to build web applications. Although there are many frameworks out there, four of them stand out: Backbone, AngularJS, Ember and React.

Perhaps you’ve had a chance to experiment with one or two of these frameworks, but are still a little unsure about the best one to commit to mastering. Web development, particularly front-end development, has been moving at a blistering pace and there is constant pressure to add valuable new skills to your repertoire.

Developers have to make tough choices about what they are going to focus on, and the idea of spending months learning a new framework that ultimately doesn’t pay off isn’t very appealing, not to mention if the result is a production application you are then committed to maintaining.

What follows is a high level look at these frameworks. It’s the result of my experience with them over the past couple of years. Some of them I’ve dug into more deeply than others, but I tried to understand the benefits and concerns with each so that I could make an informed choice about which has the most to offer.

The View from 35,000 Feet

Before we discuss the individual frameworks, let’s take a moment to get a handle on the big picture. Why would we want to use one of these frameworks in the first place? Why not just stick with a traditional server-side application?

The very short answer is that they offer a more responsive user experience. For example, when a user clicks a button, instead of waiting for the entire page to reload as in a traditional server-side web application, JavaScript frameworks only load in portions of the page as the user interacts with them, thus speeding up the responsiveness of the user interface. It can have the effect of creating a UI that feels as snappy as a native mobile app.

Of course, you can do similar things with a traditional app and a bit of jQuery, but if you’ve ever tried that then you know how quickly you can run into trouble. Unless you’re dealing with something simple, code management with jQuery quickly becomes a challenge, often leading to “spaghetti” code.

Modern JavaScript frameworks offer a way around the problem of code management by providing well-defined application architectures (often using the MVC design pattern, which jQuery lacks) that can greatly ease development. So in using one of these frameworks, we get highly responsive user interfaces along with well structured and maintainable code, which can be a huge time saver in the long run.

Now that we have a general idea of why these frameworks are getting so much attention, let’s look at each of them in turn and see what they have to offer.

Backbone.js

As one of the oldest of the JavaScript frameworks in this review, Backbone has lost much of its initial buzz, but that shouldn’t dissuade you from giving it serious consideration.

First released in 2010 by Jeremy Ashkenas, Backbone is lightweight. Coming in at just 6.3KB when minified and compressed for production and with only one dependency (Underscore.js), it’s a highly versatile and minimalistic MVC (Model-View-Controller) framework that powers a lot of sites you may be familiar with: Twitter, Hulu, Pinterest and my personal favorite, Pandora Radio.

Benefits

Aside from the compact file size, Backbone’s great strength is its versatility. Unlike the other frameworks, Backbone isn’t highly opinionated. For example, it doesn’t come with its own templating engine (aside from the basic one included in Underscore), leaving developers to choose whatever works best for them on a given project.

Because it’s so lightweight, Backbone shines brightest when used in simpler projects where speed is a priority: think single-page apps (Twitter, Pinterest) or a widget that is part of a traditional web application.

Concerns

Backbone may be better suited to more advanced JavaScript developers. Its minimalism can be both a blessing and a challenge, depending on the experience level of the developer working with it. There are a lot of different libraries and plugins that can be mixed and matched when building a Backbone application, and while many developers love this extensibility, it can be challenging for newcomers.

There are also concerns about the need for excessive “boilerplate” code in order to use the framework successfully in a project. These complaints are often dismissed by more experienced developers who suggest those who are writing a lot of boilerplate on each project aren’t using Backbone correctly.

Another significant concern—not unique to Backbone by any means—is a lack of server-side rendering. For a more in-depth discussion of this, I recommend reading Tim Kadlec’s excellent post on the topic. He writes, “if your client-side MVC framework does not support server-side rendering, that is a bug. It cripples performance.”

I agree with Kadlec on this and I would add that another concern about a lack of server-side rendering is SEO. Most search engine bots are unable to parse JavaScript, and when they happen upon a site that is using a framework that doesn’t support server-side rendering, they don’t receive the content. There are workarounds to the problem, but it’s definitely something to keep in mind. Few projects can afford to lose large amounts of organic search traffic.

Bottom line

Backbone is a framework that shines brightest in the hands of experienced developers building single page applications (SPAs) and widgets. If you’re interested in using it beyond that scope, be sure to look into options for server-side rendering and do research on additional libraries and plugins you may need in order to build your application.

AngularJS

It may surprise you to learn that AngularJS is actually an older framework than Backbone, first released in 2009 by Brat Tech LLC. But the reason Angular is typically seen as the more recent entry in the field is that it didn’t take off until it came under Google’s patronage. A quick look at Google Trends shows Angular (the red line in the graph below) gaining traction in early 2012 and exploding in popularity over the next few years.

[image:{"fid":"2389","width":"full","border":false}]

Of course this graph doesn’t tell the whole story of the relative popularity of these frameworks (it greatly undercounts React, for example), but it does give an idea of the level of attention Angular has received. This prominence has led to a robust community of contributors, but also a lot of pointed criticism that Angular fails to live up to the hype surrounding it.

Coming in at 36KB when minified and compressed for production, Angular is often called a Model-View-Whatever framework because it doesn’t adhere to the typical MVC design pattern. It’s beefier than Backbone, but you also get a lot more built-in functionality, as we’ll see below. Some notable sites built with AngularJS include VEVO, The Weather Channel, and MSNBC.

Benefits

Two-way data binding is a much loved feature of AngularJS that describes the condition where data is bound to an HTML element in the View and that element has the ability to both update and display that data. In Angular, both the Model and the View can update the data, thus the “two-way” descriptor. Angular’s implementation of this form of data binding allows for a reduction in the amount of code required to create dynamic views.

Another popular feature is directives, which allow developers to extend HTML by attaching special behaviors to parts of the DOM. For example, ng-repeat is a directive that allows developers to repeat an element, making it very handy for doing things like printing an array of items to the page. In addition to the directives that come with Angular, you can create your own, allowing for a great deal of flexibility in crafting behaviors for the UI.

Dependency injection is another great feature of Angular that allows developers to easily include services in their modules. For example, if a developer is writing a function and wants to use the $location service (a service that parses the URL in the browser address bar), all that’s required is to include it as a parameter of the function and Angular will make sure that an instance of that service is available to that function. This is also useful for injecting mock data into components which is a feature that helps make Angular highly testable.
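
To make those three features concrete, here is a minimal, hypothetical AngularJS 1.x page (the module, controller, and data are illustrative, not taken from any of the sites mentioned above):

```html
<!doctype html>
<html ng-app="demoApp">
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.min.js"></script>
  <script>
    // Dependency injection: Angular supplies $scope and $location by name.
    angular.module('demoApp', []).controller('SeasonsCtrl', function ($scope, $location) {
      $scope.seasons = [{title: 'Season One'}, {title: 'Season Two'}];
      $scope.path = $location.path();
    });
  </script>
</head>
<body ng-controller="SeasonsCtrl">
  <!-- Two-way binding: typing here updates the model and the heading below. -->
  <input ng-model="query">
  <h2>Filtering for {{ query }}</h2>
  <ul>
    <!-- The ng-repeat directive stamps out one list item per matching season. -->
    <li ng-repeat="season in seasons | filter:query">{{ season.title }}</li>
  </ul>
</body>
</html>
```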

Finally, having Google as a sponsor helps. Many enterprises (and developers) have taken the plunge on Angular because they have seen Google’s involvement as a proxy for stability, which is often a critical consideration.

Concerns

Two-way data binding! Yes, two-way data binding makes both the list of benefits and concerns. Although it does make it easier to build with Angular, people have criticized the two-way data binding because it complicates debugging and hurts performance.

The fact that Angular can be slow, particularly in larger, more complex apps, undermines a major reason for using a JavaScript framework in the first place. Sluggish performance is typically seen when an application implements a complex UI, although in skilled hands this challenge can be mostly overcome.

Also of concern, the upcoming Angular 2 will be a complete rewrite of the framework—no backwards compatibility. Many see this as a tacit acknowledgement on the part of the Angular team that the initial approach was flawed. It also may have undermined the perceived stability of the project among enterprises.

For a very useful and informative critique of Angular, I recommend this post by Peter-Paul Koch. It provides a lot of context and detailed analysis that is outside the scope of our present discussion.

One final note: Angular, like Backbone, lacks server-side rendering, though workarounds exist.

Bottom line

Angular works in a wide range of use cases, from small projects to enterprise applications. Nevertheless, if you’re planning a large, complex application, having skilled developers on hand who can tackle any performance issues that arise is critical.

Given the transition from version 1 to version 2, Angular doesn’t seem like a great choice at this point in time. Learning the current version of Angular may only position a developer to work on legacy applications and still require learning the upcoming version as well. Tall order! Whether it will be a good choice to take up Angular 2 remains to be seen.

Ember

Ember bills itself as “a framework for creating ambitious web applications,” and it has an interesting pedigree. It was created in late 2011 by Yehuda Katz, who is also a member of the jQuery and Ruby on Rails core teams.

Just as AngularJS feels like a framework written by Java developers (because it was), Ember is often said to be reminiscent of Rails, no doubt due to Katz’ intimacy with that project.

Ember, quite proudly, does not have a corporate sponsor. It is built by a robust community of developers “scratching their own itch”.

Coming in at 95KB minified and compressed for production, it is one of the heftiest of the four frameworks under discussion (jQuery and Handlebars are required dependencies that will add to that total). But with the extra size, you get a lot of built-in functionality. Websites built with Ember include: Qualcomm, Chicago.com, Nest, Vine and NBC News.

Benefits

There is a general principle within the Ember community: “convention over configuration”. When working with Ember, you should do things the “Ember way”. Pretty much everything you need to write a web app is built-in, including a templating library, routing, and tons of other things that are intended to free developers from routine and mundane reinvent-the-wheel tasks, allowing them to focus on the larger problems that are unique to their project.

One interesting thing about Ember is the Ember CLI. Although it’s not required in order to work with Ember, it’s a useful command-line tool that handles a lot of things people commonly use Grunt or Gulp for, like compiling Sass and minifying CSS and JS. Maybe you don’t want to mess with your current build system, but if you don’t have one in place, this is a tool that can get you started with minimal hassle.
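
For a flavor of that workflow, the standard Ember CLI commands look like this (the application name is a placeholder):

```bash
npm install -g ember-cli   # install the command-line tool
ember new my-app           # scaffold a project, build pipeline included
cd my-app
ember serve                # build, watch, and serve with live reload
```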

The other thing that many developers seem to love about Ember is the fact that it doesn’t have a corporate patron and that the team behind it is deeply committed to open source software. There are some developers who look at the corporate sponsorship of other frameworks a bit warily, so this commitment from the Ember team will be a factor for those folks.

Concerns

The biggest concern with Ember is the need to do things the “Ember way”. Now, to some extent, this is similar to Angular, but Ember takes it further. Ember aspires to provide a complete solution, soup to nuts. It’s certainly a marked contrast with Backbone, which allows you to mix and match things more or less as you see fit.

There is also a lot of generated code with this approach, which can make it hard for developers to understand exactly what’s going on. When you have a large framework with so much built-in functionality, the learning curve is steep.

Ember also uses two-way data binding, although it uses a different implementation than Angular. Perhaps in response to community concerns, the Ember team has announced they will be moving away from two-way data binding in the future.

Bottom line

Ember is a good framework, and where it has weaknesses, there seems to be a lot of effort to improve. For example, like Angular and Backbone, it doesn’t support server-side rendering, but it has been announced that it will have that support soon, which is great. Overall, I think Ember may be best suited for teams working on medium to large projects. It is highly opinionated, so if you like to roll your own, it may not be for you.

React

The new kid on the block is also currently the hottest, stealing much of Angular’s buzz. Released in 2013 by Facebook, it takes a different approach from the other frameworks we’ve been discussing.

Backbone, Angular and Ember are often referred to as client-side MVC frameworks. That doesn’t accurately describe React, however. Facebook says that React is more of the V in MVC—the View part. The rest of the pattern is flexible with React, but Facebook uses the Flux architecture to fill it out, which is a pattern best suited for larger applications. As a practical matter, plain old React will do just fine for a lot of applications.

It comes in at 120KB when minified and compressed for production, making it the largest framework in terms of file size, although it doesn’t have any required dependencies. Sites using React include: Facebook, Instagram (basically one large React app), Flipboard, BBC and Netflix.

Benefits

It’s very fast: the fastest of the bunch. If you’re interested in where React gets its speed, I recommend learning more about its implementation of a virtual DOM and synthetic events.

Another thing developers love about React is that it’s easy to learn. Angular and Ember both have a lot of what’s called “domain specific language.” It’s part of what creates the relatively steep learning curves for those frameworks. React has significantly less of this, making it easier for developers with JavaScript experience to get a handle on.

React has a component-based approach that will come quite naturally to those familiar with creating CommonJS modules. Each React component represents a part of the UI - a form element, a page title, etc. These components can then be mixed and matched thus allowing for maximum code reuse.
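
As a minimal, hypothetical sketch of that component model, using React 0.13-era APIs and skipping JSX so no build step is needed (component and data names are illustrative):

```js
// A self-contained component: the markup is produced in render(),
// not in a separate template file.
var EpisodeList = React.createClass({
  render: function () {
    // Build one <li> per episode passed in via props.
    var items = this.props.episodes.map(function (title) {
      return React.createElement('li', {key: title}, title);
    });
    return React.createElement('ul', null, items);
  }
});

// Mount it; assumes a <div id="app"></div> exists on the page.
React.render(
  React.createElement(EpisodeList, {episodes: ['Pilot', 'Finale']}),
  document.getElementById('app')
);
```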

One very cool benefit of React is that you can use it to create mobile applications. That’s right, React Native allows developers to learn React once, and use it to write both web and iOS apps (Android is coming soon). To me, this is a truly amazing feature.

React also supports server-side rendering. This goes a very long way to solving the performance and SEO problems that have plagued client-side frameworks since their inception. Once the initial page load of the server rendered content has occurred, React on the client-side takes over and updates the UI based on user interaction. Having that first render done on the server eliminates the risk that search engines won’t be able to index the site while also providing faster page loads.

Concerns

The most controversial aspect of React is the absence of templates and the use of Components to generate the UI. What this essentially means is that your HTML lives inside your JavaScript.

For a lot of developers this just seems…wrong. At least initially. Many come around when they realize that everything concerning a given component is located in a single place. The advantages then become clear: easier debugging, code reuse, and separation of concerns. To help get your head around the React approach, Facebook has put together a nice post that walks you through it.

Bottom line

React delivers on the hype. Pretty much any size project can successfully use React and it excels in delivering on all the key things we look for in a framework while addressing the most frequent criticisms. React Native also looks like a game changer.

Final Thoughts

You may be able to guess that I view React as the strongest framework, at least for now. I’m impressed. That said, all of them are good. That’s why they’ve risen to the top of a crowded field.

JavaScript is becoming a much more important part of web development and is key to innovative approaches like decoupled—or “headless”—Drupal, an application architecture that the team here at Lullabot has helped pioneer.

Ultimately, the framework you choose to invest in will come down to personal preference and the type of projects you or your team want to work on. However, if you’re a developer, particularly a front-end developer, I recommend taking the plunge and gaining expertise with one of them.

https://www.lullabot.com/articles/choosing-the-right-javascript-framework-for-the-job

Thu, 09 Apr 2015 18:00:00 GMT

This is the final article in my series on being a new Lullabot, where I focus on what it’s like when we get together in person. Catch up with Part 1 and Part 2 to learn more about what it’s like working apart.

Coming Together For Projects

As I mentioned in Part 2, my first week was spent on-site with my project team in Atlanta, working alongside designers and developers. It's funny: I’d dreamed of working remotely on a distributed team, yet I cherish these “face time” days together most.

[image:{"fid":"2382","width":"full","border":false}]

With scheduled virtual meetings, it’s easy to focus on the task at hand and forget to enjoy each other’s company. Meeting face-to-face fills in the blanks. I see expressions and mannerisms. And it’s just so fun! While sketching ideas for the new GRAMMY.com side-by-side at a huge conference table in our Atlanta hotel, we cracked jokes, shared music, and fed off each other's energy. Our sketch sessions were inspired.

Each night brought opportunities to bond as a team. I left with a new love of sour beers—Duchesse de Bourgogne in particular—and a backlog of design apps bookmarked or downloading on my iPhone.

The Team Retreat

Various Lullabots connect in person several times a year, between onsite meetings for client projects, conferences, and other organized events. Once a year, however, the whole company puts client work on hold for a week to be together. This year, we went to Smoke Tree Ranch in Palm Springs. I remember a co-worker saying it "feels like Christmas, with the retreat only weeks away". I thought she was teasing. Now I know better.

Smoke Tree Ranch is not a rugged sort of ranch. National garden tours make pilgrimages here to check “immaculately landscaped desert oasis” off their list of things to see. Our cottages were cozy, the weather was perfect, and the grounds were gorgeous.

[image:{"fid":"2385","width":"full","border":false}]

While staying at the ranch, we enjoyed a mixture of group discussions, activities, and one-on-ones. Directors gave passionate presentations, the team partook in thought-provoking round-table discussions, and we filled the dining hall with conversation at mealtimes. Because the majority of the team traveled West, impromptu morning hikes and yoga sessions began at dawn. There was time set aside to chill out each day after lunch that we could freely spend together or alone. Each afternoon at the golden hour, with the sun setting in pink and orange, we’d break into groups of about six people for circles, where we were encouraged to share our feelings and experiences without judgement. Some of the things my coworkers said were profound, vulnerable, or both, and I was thankful to learn so much about them.

[image:{"fid":"2384","width":"full","border":false}]

Nights at Smoke Tree Ranch

Evenings were my favorite. We’d crack open beers and enjoy some new way to share each night. We started with a trivia night, focused on facts about team members that helped us get to know each other. There was even a question about me! It was sort of a trick question; Jeff asked which Lullabot is dating Will Farrell—that is, my developer boyfriend, not Will Ferrell, the actor. The next night was dedicated to Ignite talks, hilarious five-minute presentations consisting of twenty auto-advancing slides. Juan Pablo Novillo Requena, a Spaniard affectionately known as “Juampy”, collected weird English idioms he’d heard from co-workers for months—from whoopee to mongongous—then shared his findings. Greg Dunlap taught us about the insane Swedish tradition of burning the Gävle goat on Christmas Eve. Somehow, we all ended up chanting, “Burn the goat!” Subsequent evenings brought the talent show and storytelling night: brave ‘Bots (myself not included) sang, played instruments, or told stories, a mix of funny, suspenseful, and heart-warming.

[image:{"fid":"2386","width":"full","border":false}]

Heading into the retreat, I was daunted by the prospect of getting to know the more than 50 people I hadn’t met in person before. Not only did I meet everyone, I formed deeper connections than I expected. I was able to see firsthand the amazing dynamic of the team, which is balanced between professionalism and fun. The dining hall (and the hours to follow) felt like a party with friends every night—oh yeah, and the final night brought an actual Lullabot party poolside with a mariachi band. Coworkers let down their hair and goofed off. That night, I played Cards Against Humanity and watched as a cadre of developers tossed a very tall C-level exec into the pool.

Welcome Home

As we wrapped up the retreat, I felt like part of the Lullabot family, and am honored to say so. Both on and offline, I have experienced the culture of the company, and I love what it stands for. I am getting into the rhythm of the schedule and specifics of the work. For those remaining things I'm still figuring out, I know that I am not alone. I still have a long way to go, especially as I look around at all of those who have been at Lullabot for many years. There's always more history to learn, and processes to explore, but I'm comfortable with whatever my Lullabot experience will bring. I know I’ll have my team behind me. I am so proud of them, their accomplishments and their passions, and am proud to be a part of such a supportive, flexible team.

Together Apart

Overall, our time together was informative, thought-provoking, productive, and I can't think of the last time I had so much fun. Between heckling and joking, the team kept me laughing all day. We came to learn more about our company and make it better, but still found time to soak up the sun by the pool with breathtaking mountain views.

[image:{"fid":"2383","width":"full","border":false}]

On one hand, I wish there were more weeks like these, because I enjoyed the company of my coworkers so much. I’ll miss my team until I see them again. Nevertheless, this retreat was special because it’s not the norm. Since we don't interact in an office every day, we can stretch beyond our comfort zones to connect with one another and really make the most of our time. Most of us chose to work at a distributed company because we enjoy uninterrupted spaces of concentration to do great work. We’re recharged by solitude. We like our own space.

https://www.lullabot.com/articles/getting-together-when-you-work-apart

Thu, 02 Apr 2015 17:00:00 GMT

Drupal's Form API helps developers build complex, extendable user input forms with minimal code. One of its most powerful features, though, isn't very well known: the #states system. Form API #states let us create form elements that change state (show, hide, enable, disable, etc.) depending on certain conditions—for example, disabling one field based on the contents of another. For most common form UI tasks, the #states system eliminates the need to write custom JavaScript, and it avoids inconsistencies by translating simple form element properties into standardized JavaScript code.

Each form field consists of states and remote conditions that define the properties of the field. A state is a property that can be applied to a form element (e.g. enabled, disabled, checked, unchecked), while a remote condition is the state of another element that triggers a change in this one. All of the available states and remote conditions are defined in drupal_process_states(). Wunderkraut also has a great article, quoting our co-founder and CEO Jeff Robbins, that breaks the states down into two categories: "the ones that trigger change in others" and "the ones that get applied onto elements".

Starting with a simple form, let's look at a few examples.

Here is a simple form to collect some basic personal info from a user. [image:{"fid":"2294","width":"full","border":false}]

We would like the name field and the anonymous checkbox to work together: we can use the invisible state to hide the name field when the anonymous checkbox is checked and, conversely, keep the anonymous checkbox unchecked while the name field is filled.
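
The code sample didn't survive in this copy of the article; here is a minimal reconstruction using Drupal 7's #states syntax (the field names are illustrative):

```php
<?php
$form['name'] = array(
  '#type' => 'textfield',
  '#title' => t('Name'),
  // Hide the name field while the anonymous checkbox is checked.
  '#states' => array(
    'invisible' => array(
      ':input[name="anonymous"]' => array('checked' => TRUE),
    ),
  ),
);
$form['anonymous'] = array(
  '#type' => 'checkbox',
  '#title' => t('Post anonymously'),
  // Keep the checkbox unchecked while the name field contains text.
  '#states' => array(
    'unchecked' => array(
      ':input[name="name"]' => array('filled' => TRUE),
    ),
  ),
);
```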

There is also a special syntax for the exclusive or (XOR) operator, in which one or the other condition is true, but not both. While the OR operator is implied by using a non-associative array, the XOR operator needs to be explicitly defined with an array item containing the string 'xor'. Let's change the email field to accommodate users who prefer to be anonymous.
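
Again reconstructing the lost sample, a hypothetical version of the email field shows the 'xor' array item in action: the field is visible when a name is filled in or the anonymous box is checked, but not when both are true.

```php
<?php
$form['email'] = array(
  '#type' => 'textfield',
  '#title' => t('Email'),
  '#states' => array(
    'visible' => array(
      ':input[name="name"]' => array('filled' => TRUE),
      // The string 'xor' switches the implied OR into an exclusive or.
      'xor',
      ':input[name="anonymous"]' => array('checked' => TRUE),
    ),
  ),
);
```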

Conclusion

Form API #states is a great way to generate consistent JavaScript code for simple form interactions. Although custom JavaScript might be necessary for more complex requirements, the #states system gives us a very good start in creating centralized and standardized front-end code. I find the #states system generally much cleaner in terms of code maintenance. Compared to custom JavaScript, it's also less prone to bugs and accessibility issues.


https://www.lullabot.com/articles/form-api-states

Thu, 26 Mar 2015 18:00:00 GMT

This year we have a variety of presentations for you at DrupalCon LA. These all come out of the hard work we're doing all year round on projects such as Tesla, Syfy, SNL, NBC, and Bravo (to name a few), and also within the Drupal community.

Sally Young, Carwin Young, & Wes Ruvalcaba
Front-end web development is evolving fast, and selecting the right tools, and when to use them, is key to building successful solutions. Knowing why you might incorporate new techniques and what's a good fit for your needs can be challenging with so many choices available, whilst balancing client needs, team efficiency and code quality.

Greg Dunlap
It's the very first meeting with your shiny new client. A blank slate, the opening steps of a potentially year-plus-long project. This is where it all begins: discovery. [...] A lot of devs get thrown into this process with no framework or roadmap for how to manage it. This talk will give them one [...]

Dave Reid
The Drupal 8 Contrib Media Team is making good progress on our master plan for media handling in Drupal 8. We'd like to share what we've done so far, what is left to do, and how everything fits together into the vision we have for D8 Media.

Amber Himes Matz, Joe Shindelar, & Greg Dunlap
Over the years the documentation available on drupal.org has grown, and expanded, and then grown some more. But our tools, policies, and processes for maintaining it haven't always kept up with that unchecked growth. So how can we as a community update the way we do documentation to make it as well structured, thought out, and maintained as the code base it seeks to document?

Marissa Epstein
[...] discuss fundamental principles of cognitive and behavioral psychology, and how they apply to creating smooth user experiences. See the mistakes even intelligent people can make, and how you should handle them. Human-proof your designs by feeding the lizard brain, and help website visitors skip complicated thinking with simple interactions.

Joe Shindelar
The Drupal 8 plugin system provides a set of guidelines and reusable code components that allow developers to expose pluggable functionality within their code and (as needed) support managing these components through the user interface. Understanding the ins and outs of the plugin system will be critical for anyone developing modules for Drupal 8.

Jared Ponchot
Open Source software has been conceived of, created and maintained by distributed teams, and several companies like the one I work for (Lullabot) have formed that leverage this same distributed model for a business. It's a model that seems natural for software development. But what about for other disciplines like design?

Jeff Eaton
Better HTML-focused WYSIWYG tools aren't enough, adding more and more fields to the mix only complicates editors' lives, and the principles of semantic HTML don't solve the deeper problem. The work of content modeling must extend inside the body field, not just wrap around it, and that requires a more holistic approach to the design and architecture of a Drupal site.

Seth Brown, Mike Herchel, & Chris Albrecht
In this session the combined Lullabot and Syfy teams will discuss what was involved in creating what's been called "the best television website on the planet," as well as what lies ahead for television sites in general.

https://www.lullabot.com/articles/drupalcon-2015-lullabot-sessions

Wed, 25 Mar 2015 14:54:56 GMT

Have you heard about the Drupal 8 Accelerate fund? The Drupal Association is collaborating with Drupal 8 branch maintainers to provide grants for those actively working on Drupal 8, with the goal of accelerating its release.

Here at Lullabot, we can’t imagine a more worthy effort, and we’ve been chipping in to get D8 shipped for quite some time now. Along with community contributions from ’bots like Dave Reid, Matthew Tift, and Marc Drummond, we’ve also been investing quite a bit to spread the D8 message through Drupalize.Me’s Drupal 8 video tutorials. We even highlighted the D8 Accelerate fund on the Drupalize.Me podcast last December.

Still, we didn’t feel this was enough, so Lullabot, along with Drupalize.Me, has just contributed $5,000 to Drupal 8 Accelerate as an anchor donor! We couldn’t be more excited. This money will directly fund grants to the community and help pay the people who are working to get Drupal 8 out the door.

These grants have already started being awarded, and we’re eager to see their continued impact. At Lullabot, we’re thankful for everyone who’s contributing to Drupal 8, and we’d like to encourage people to donate to Drupal 8 Accelerate however they can.