Planet Drupal

Something that recently inspired me to write about DUG is the efforts of Mediacurrent. Mediacurrent has been publishing a series of posts about how they give back, and has been much more open about the time they devote to giving back (which is awesome).

DrupalCon Bogotá just finished up, and critical issue-wise we've managed to stay in the 50s for a few days (down from a high of 150 last summer!), so now seems like as good a time as any to write down what's left to ship Drupal 8!

This post attempts to document all 55 remaining criticals (as of this writing) and to offer a somewhat "plain English" (or at least "Drupal English" ;)) description of each, loosely categorized into larger areas where we could really use extra help. There are over 2,600 contributors to Drupal 8 at this time; please join us!

(Note: These descriptions might not be 100% accurate; this is my best approximation based on the issue summary and last few comments of each issue. If I got the description of your pet issue wrong, please update your issue summary. ;))

Within this list, there are several "markers" used to signify that some issues are more important to fix ASAP. These are:

D8 upgrade path: An issue tagged D8 upgrade path (currently, 13) blocks a beta-to-beta upgrade path for Drupal 8, generally because it materially impacts the data schema or security. Once we resolve all of these blockers, early adopters will no longer need to reinstall Drupal between beta releases; they can just run the update.php script as normal. This is currently our biggest priority.

Blocker: An issue tagged blocker (currently, 5) means it blocks other issues from being worked on. This is currently our second-biggest priority (or our 0th priority in cases where an issue blocks a D8 upgrade path issue :D). I've noted these as "sub-bullets" of the issues that are blocking them.

Postponed: Issues that are marked postponed (currently, 9) are either currently blocked by one of the "Blocker" issues, or we've deliberately chosen to leave off until later.

>30 days: These issues have a patch more than 30 days old, and/or were last meaningfully commented on more than 30 days ago. If you're looking for a place to start, re-rolling these is always helpful!

No patch: This issue doesn't have a patch yet. Oh the humanity! Want to give it a shot?

"PP-3" means "this issue is postponed on 3 other issues" (PP-1 means 1 other issue; you get the drift).

Current state of critical issues

Sections roughly organized from "scariest" to "least scary" in terms of how likely they are to make Drupal 8 take a longer time to come out.

Security

Because Drupal 8 hasn't shipped yet, it's not following Drupal's standard Security Advisory policy, so there are still outstanding, public security issues (13 as of this writing). We need to resolve most of these prior to providing a Drupal 8 beta-to-beta upgrade path, as this is the time when we signal to early adopters that it's an OK time to start cautiously building real sites on Drupal 8.

Skills needed: Various

Security Parity with Drupal 7

This class of security issue is to ensure that when Drupal 8 ships, it won't have any regressions security-wise relative to Drupal 7.

Port SA-CONTRIB-2015-039 to D8 (D8 upgrade path) SA-CONTRIB-2015-039 addressed two issues in Views module, a redirect and default permissions for disabled views. The first was fixed in D8, but access checks are still missing from a few views for the second.

Because of various intricate dependencies, the authentication part of Drupal 8 isn't yet converted to object-oriented code, and prevents us from further optimizing bootstrap. This set of issues fixes various problems with this part of the code, and ensures these important security APIs are complete and ready to ship.

REST user updates bypass tightened user account change validation (D8 upgrade path) Since Drupal 7, when you edit your user account, you have to provide your existing password to change the password or e-mail. This security feature is currently bypassed by REST user updates, as you can change the password or e-mail without providing the existing password.

External caches mix up response formats on URLs where content negotiation is in use (>30 days) Drupal 8's request processing system is currently based on content negotiation (which allows you to serve multiple versions of a document at the same URI based on the headers that are sent, e.g. Accept: text/html or Accept: application/json). This is generally considered the "right way" to do REST. However, various external caches and CDNs have trouble with this mechanism and can mix responses up, sending a random format back. The issue proposes changing from content negotiation to separate, distinct paths such as /node/1.json.
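To make the distinction concrete, here is a tiny standalone sketch of the "distinct paths" approach: the format is derived from a URL suffix, so external caches can key on the URL alone. The helper name and supported formats are invented for illustration; this is not Drupal's routing code.

```php
<?php
// Hypothetical helper: derive the response format from a path suffix
// such as /node/1.json, defaulting to HTML when no suffix is present.
function formatFromPath(string $path): string
{
    if (preg_match('/\.(json|xml)$/', $path, $matches)) {
        return $matches[1];
    }
    return 'html';
}

echo formatFromPath('/node/1.json'), "\n"; // json
echo formatFromPath('/node/1'), "\n";      // html
```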

New security improvements

These issues affect new security improvements we want to make over and above what Drupal 7 does.

[meta] Document or remove every SafeMarkup::set() call One of the big security improvements in Drupal 8 is the introduction of Twig's autoescape feature, which ensures that all output to the browser is escaped by default. However, this is quite a big change that requires all of the code that was previously escaping content to stop doing that, else it gets double-escaped (so you start seeing &lt; and &quot; and whatnot in the UI). We originally introduced the ability to manually mark markup safe with SafeMarkup::set(), but the recommended approach is actually to use Twig everywhere, so this issue is to ensure that all remaining instances of the manual way are fixed, or at least documented to explain why they're using the non-recommended method.
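A tiny standalone demonstration of the double-escaping problem, using plain htmlspecialchars() as a stand-in for both the old manual escaping and Twig's autoescape layer (this is an illustration, not Drupal code):

```php
<?php
// Simulate a module that escapes its output manually before handing it
// to a template layer that autoescapes everything (as Twig does).
$title = 'Hello <em>"world"</em>';

// Manual escaping, as pre-autoescape code often did.
$manuallyEscaped = htmlspecialchars($title, ENT_QUOTES, 'UTF-8');

// The autoescape layer then escapes the string a second time.
$doubleEscaped = htmlspecialchars($manuallyEscaped, ENT_QUOTES, 'UTF-8');

// The entities themselves get re-escaped, so the browser renders the
// literal text "&lt;em&gt;" instead of markup.
echo $doubleEscaped, "\n";
```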

Tied with security, 13 of the remaining issues are tagged Performance. While it may seem odd/scary to have this be a big chunk of the work left, it's a common practice to avoid premature optimization, and instead focus on optimization once all of the foundations are in place.

Skills needed: Profiling, caching, optimization, render API

Profiling

Here are a sub-set of issues where we need performance profiling to determine what gives us the biggest bang for our effort.

Profile/rationalise cache tags Drupal 8's caching API introduces the notion of cache tags, allowing for much more focused and targeted cache clears for much better performance. This issue involves investigating our usage of cache tags in D8 and seeing how they could be optimized/improved.
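For readers new to the concept, here is a minimal standalone sketch of the idea behind cache tags, with plain arrays standing in for Drupal's cache API:

```php
<?php
// Each cache entry carries the tags of the content it was built from;
// invalidating one tag drops only the entries that carry it.
$cache = [
    'frontpage'   => ['data' => 'rendered front page', 'tags' => ['node:1', 'node:2']],
    'node_1_page' => ['data' => 'rendered node 1',     'tags' => ['node:1']],
    'about_page'  => ['data' => 'rendered about page', 'tags' => ['node:3']],
];

// Saving node 1 invalidates the "node:1" tag: everything else survives,
// instead of wiping the whole cache bin.
$cache = array_filter(
    $cache,
    fn (array $entry): bool => !in_array('node:1', $entry['tags'], true)
);

echo implode(',', array_keys($cache)), "\n"; // about_page
```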

Fix regressions relative to Drupal 7

[meta] Resolve known performance regressions in Drupal 8 This is the main tracking issue in this space. During the 8.x cycle we've introduced several known performance regressions compared to Drupal 7 (sometimes to make progress on features/functionality, other times because we introduced changes that we hoped would buy us better scalability down the line), which we need to resolve before release so that Drupal 8 isn't slower than Drupal 7. The performance team meets weekly and tracks their progress in a detailed spreadsheet.

Add cache wrapper to the UrlGenerator In Drupal 8, the url() function has been replaced by the UrlGenerator class. This issue proposes adding caching so that the generator doesn't redo work once it has already generated a given URL on the page.
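The general idea can be sketched with a standalone memoization wrapper. The class and method names below are invented for illustration; this is not the actual UrlGenerator API.

```php
<?php
// Hypothetical per-request memoization around URL generation.
class TinyUrlGenerator
{
    private array $cache = [];
    public int $misses = 0;

    public function generate(string $route, array $params = []): string
    {
        // Key on everything that influences the generated URL.
        $key = $route . ':' . serialize($params);
        if (!isset($this->cache[$key])) {
            $this->misses++;
            // Stand-in for the expensive route compilation work.
            $this->cache[$key] = '/' . str_replace('.', '/', $route)
                . ($params ? '?' . http_build_query($params) : '');
        }
        return $this->cache[$key];
    }
}

$generator = new TinyUrlGenerator();
$generator->generate('entity.node.canonical', ['node' => 1]);
$generator->generate('entity.node.canonical', ['node' => 1]); // served from cache
echo $generator->misses, "\n"; // 1
```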

Schema for newly defined entity types is never created (D8 upgrade path) When you first install a module that defines an entity type (for example, Comment), its database tables are correctly generated. However, if an entity definition is later added by a developer to an already-installed module, the related database schema won't get created, nor will it be detected in update.php as an out-of-date update to run.

FileFormatterBase should extend EntityReferenceFormatterBase (D8 upgrade path) Entity Reference fields define an EntityReferenceFormatterBase class, which contains logic about which entities to display in the lookup, including non-existing entities and autocreated entities. File field's FileFormatterBase class currently duplicates that logic, except it misses some parts, including access checking, which makes this a security issue. The issue proposes simply making File field's base class a sub-class of Entity Reference's, removing the need for "sort of but not quite the same" code around key infrastructure.

[META] Untie content entity validation from form validation Despite all the work to modernize Drupal 8 into a first-class REST server, there remain places where validation lives in form validation functions rather than in the proper entity validation API, which means REST requests (or other workflows that bypass form submissions) miss those validation routines. This meta issue tracks progress on moving the logic to its proper place.

Entity forms skip validation of fields that are edited without widgets (>30 days) If a field can be edited with a form element that is not a Field API widget, we do not validate its value at the field-level (i.e., check it against the field's constraints). Fixing this issue requires ensuring that all entity forms only use widgets for editing field values.

Entity forms skip validation of fields that are not in the EntityFormDisplay (No patch, >30 days) Drupal 8 has a new feature called "form modes" (basically analogous to "view modes" in Drupal 7, except allowing you to set up multiple forms for a given entity instead). Currently, we're only validating fields that are displayed on a given form mode, even though those fields might have validation constraints on other fields that are not displayed. Critical because it could present a security issue.

Views

Views issues are generally tracked with the VDC tag. There are currently 6 criticals at this point which touch on Views (some already covered in earlier sections).

The configuration system is remarkably close to being shippable! Only 4 critical issues left. We're now working on finalizing the niggly bits around edge cases that involve configuration that depends on other configuration.

Don't install a module when its default configuration has unmet dependencies (D8 upgrade path) Seems like a good idea. :P Basically handles the situation where a module provides some default configuration (say, a default View), which references a dependency on some other module (say, an Entity Reference field). You want to ensure that the module's default configuration can't be installed unless all the various dependencies it needs are there.

This subset of issues covers things that are currently part of core and that we would really like to keep, but we are willing to make some hard choices if they are among the last remaining criticals blocking release. "Postponed" in this list means "postponed until we're down to only a handful of criticals left." If these issues end up remaining on the list, we will move their functionality to contrib, and hope to add it back to core in a later point release if it gets fixed up.

[meta] Drupal.org (websites/infra) blockers to a Drupal 8 release (Blocker) This issue contains a "grab bag" of Drupal.org blockers that prevent an optimal Drupal 8 release, including things like semantic versioning support, testing support for multiple PHP/database versions, and support for Composer-based installations. If this issue is one of the last remaining criticals, we might choose to ship Drupal 8 anyway, and jettison one or more features in the process, such as…

[Meta] Make Drupal 8 work with PostgreSQL The meta/planning issue for fixing PostgreSQL (both in terms of functionality and in terms of failing tests). bzrudi71 is predominantly leading the charge here and making steady progress, but more hands would be greatly appreciated.

[meta] Database tests fail on SQLite (>30 days) Same deal as PostgreSQL but for SQLite. Unlike PostgreSQL though, this one doesn't have anyone leading the charge at this time, and it's also a lot harder to punt this to contrib, since we use it for various things such as testbot. Help wanted!

These are all basic things we need to keep on top of between now and release, to ensure that when we're down to only a handful of criticals, we're ready to ship a release candidate. The good news is, these are also all generally really easy patches to make, and often also to test.

[meta] Ship minified versions of external JavaScript libraries (Postponed) Basically, in the Gilded Mobile Age™ we want to ensure that we're sending as little over the wire as possible, so scrunching various JS libraries down to the smallest possible file size needs to be the default. Separate issue from above because it needs to happen for both updated and existing JS libraries. Postponed because there'll be less work to do once all of the out-of-date JS libraries are updated and minified at the same time.

[meta] Provide a beta to beta upgrade path (D8 upgrade path, Postponed) A policy issue that documents what holds up a beta-to-beta upgrade path, and what happens after we ship an "upgrade path beta." Postponed until all other critical D8 upgrade path issues are fixed.

However, _system_path in particular is used a ton, since it's very common to want to know the path of the current request. The patch exposes a "CurrentPath" service instead, which eliminates all of those issues.

Potential data loss: concurrent node edits leak through preview Because the temp store that Drupal 8's new node preview system employs uses an entity's ID as the key, rather than something uniquely identifiable to a user, if two users are editing the same node and hit preview at the same time, one of them is going to lose data due to a race condition.
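A toy illustration of why the key choice matters, with plain arrays standing in for the temp store (the keys are hypothetical; the actual fix may differ):

```php
<?php
// Keying the preview store by node ID alone lets concurrent editors
// overwrite each other, while a user-specific key keeps drafts separate.
$sharedStore = [];
$sharedStore[42] = "Alice's draft";
$sharedStore[42] = "Bob's draft"; // Alice's preview data is silently lost.

$perUserStore = [];
$perUserStore['alice:42'] = "Alice's draft";
$perUserStore['bob:42']   = "Bob's draft"; // Both drafts survive.

echo count($sharedStore), ' vs ', count($perUserStore), "\n"; // 1 vs 2
```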

Sorry this post was so long (and probably has its share of inaccuracies) but I hope it will be helpful to some. It's basically what I needed to get back up to speed after taking a few months off of Drupal 8, so figured I'd document my way to understanding.

Drupal’s default login page form is functional but does leave a lot to be desired. It’s pretty bland and, if left as-is, is always a telltale sign that your site is a Drupal website. The Super Login Module for Drupal 7 is a simple way to improve the look and functionality of Drupal's login page.

If you're using Behat and the Drupal Extension, you might find the following code snippet helpful if you want to add a step to wait for batch jobs to finish.

If one of your Behat scenarios kicks off a batch job (e.g., a Feeds import), and you want to wait for that batch job to finish before moving on to the next step, add this step definition in your FeatureContext.php file:
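Since the snippet itself didn't survive into this excerpt, here is a sketch of what such a step definition could look like. It assumes the Drupal Extension's RawDrupalContext and a JavaScript-capable Mink driver; the step name, the 180-second timeout, and the "updateprogress" element ID are assumptions, not necessarily what the original post used:

```php
<?php
// Sketch only: requires Behat, the Drupal Extension, and a Mink driver
// that executes JavaScript. Names and selectors are illustrative.
use Drupal\DrupalExtension\Context\RawDrupalContext;

class FeatureContext extends RawDrupalContext
{
    /**
     * @Given I wait for the batch job to finish
     */
    public function iWaitForTheBatchJobToFinish()
    {
        // Drupal renders its batch progress bar in a #updateprogress
        // element; wait (up to 180s) until it disappears from the page.
        $this->getSession()->wait(180000, 'jQuery("#updateprogress").length === 0');
    }
}
```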

One of the coolest things about Lullabots is their desire to teach and share their knowledge. They do this in many formats: podcasts, articles, presentations, and even writing books. Joe Fender and Carwin Young decided there was an absolute need to write a book that brings all aspects of Front-End tools, frameworks, concepts, and procedures into one place — Front-End Fundamentals.

We are serious about Drupal. Our relationship has already lasted 7 years. Today is St. Valentine’s Day, a good day to express our love to Drupal. Drupal united us and allowed us to make new friends, so it IS awesome and incredibly cool without any doubt! So here are a few reasons we love it (just listen to it; it sounds like an ode to a real loved one):

The Drupal community talks a lot about best practices. When I talk about best practices I mean code-driven development, code reviews, SCRUM, automated tests… I immediately realised that introducing new ways of working is not going to be easy. So I figured, why not ask one of the smart people how to start? Amitai (CTO of Gizra) was very kind to have […]

So, like a bunch of other Drupal people, we're also experimenting with Drupal 8 for our Drupal distro OpenLucius. Being 'less is more' developers, one aspect we really like is dependency injection.

For those not familiar with me, a little research should make it clear that I am the person behind the testbot deployed in 2008, which has revolutionized Drupal core development, stability, etc., and which has been running tens of thousands of assertions against each patch submitted to core and many contributed modules for 6 years.

My intimate involvement with the testbot came to a rather abrupt and unintended end several years ago due to a number of factors (of which only a select few members of this community are aware). After several potholes, detours, and bumps in the road, it became clear to me that maintaining and enhancing the testbot under the policies and constraints imposed upon me was impossible.

Five years ago we finished writing an entirely new testing system, designed to overcome the technical obstacles of the current testbot and to introduce new features that would enable an enormous improvement in resource utilization that could then be used for new and more frequent QA.

Five years ago we submitted a proposal to the Drupal Association and key members of the community for taking the testbot to the next level, built atop the new testing system. This proposal was ignored by the Association and never evaluated by the community. The latter is quite puzzling to me given:

the importance of the testbot

the pride this open source community has in openly evaluating and debating literally everything (a healthy sentiment especially in the software development world)

the years of my life I had already freely dedicated to the project.

The remainder of this read will:

list some of the items included in our proposal that were dismissed with prejudice five years ago but have since been adopted and implemented

compare the technical merits of the new system (ReviewDriven) with the current testbot and a recent proposal regarding "modernizing" the testbot

provide an indication of where the community will be in five years if it does nothing or attempts to implement the recent proposal.

This read will not cover the rude and in some cases seemingly unethical behavior that led to the original proposal being overlooked. Nor will this cover the roller coaster of events that led up to the proposal. The intent is to focus on a technical comparison and to draw attention to the obvious disparity between the systems.

About Face

Things mentioned in our proposal that have subsequently been adopted include:

paying for development primarily benefiting drupal.org instead of clinging to the obvious fallacy of "open source it and they will come"

paying for machine time (for workers) as EC2 is regularly utilized

utilizing proprietary SaaS solutions (Mollom on groups.drupal.org)

automatically spinning up more servers to handle load (e.g. during code sprints) which has been included in the "modernize" proposal

Comparison

The following is a rough, high-level comparison of the three systems that makes clear the superior choice. Obviously, this comparison does not cover everything.

| | Baseline | Backwards modernization | True step forward |
| --- | --- | --- | --- |
| **System** | Current qa.drupal.org | "Modernize" proposal | ReviewDriven |
| **Status** | Has been running for over 6 years | Does not exist | Existed 5 years ago at ReviewDriven.com |
| **Complexity** | Custom PHP code and Drupal; does not make use of contrib code | Mish-mash of languages and environments (Ruby, Python, Bash, Java, PHP, several custom config formats, etc.); will butcher a variety of systems away from their intended purpose and attempt to have them all communicate; adds a number of extra levels of communication and points of failure | Minimal custom PHP code and Drupal; uses commonly understood contrib code like Views |
| **Maintainability** | Learning curve, but all PHP | Languages and tools not common to Drupal site building or maintenance; a vast array of systems to learn, and the unique ways in which they are hacked | Less code to maintain, and all of it familiar to Drupal contributors |
| **Speed** | Known; gets slower as the test suite grows due to serial execution | Still serial execution, and probably slower than current, as each separate system adds additional communication delay | An order of magnitude faster thanks to concurrent execution; limited by the slowest test case (*see below) |
| **Extensibility (plugins)** | Moderately easy; does not utilize contrib code, so requires knowledge of the current system | Several components, one on each system used; new plugins will have to pass data to or tweak any of the layers involved, so writing a plugin may involve a variety of languages and systems and thus a much wider breadth of required knowledge | Much easier, as it heavily uses common systems like Views; and all PHP |
| **Security** | Runs as the same user as the web process | Many more surfaces for attack, which require proper configuration | Daemon to monitor and shut down job processes; lends itself to Docker style with added security |
| **3rd party integration** | Basic RSS feeds and restricted XML-RPC client API | Unknown | Full Services module integration for a public, versioned, read API, with write access for authorized clients |
| **Stability** | When not disturbed, has run well for years; primary causes of instability include ill-advised changes to the code base; temporary and environment reset problems easily solved by using Docker containers with the current code base | Unknown, but multiple systems imply more points of failure | Same number of components as the current system; Services versioning allows components to be updated independently; far less code, as the majority depends on very common, heavily used, and stable Drupal modules; 2-part daemon (master can react to misbehaving jobs); a Docker image could be added with minimal effort, as the system (which predates Docker) is designed with the same goals as Docker |
| **Resource utilization** | Entire test suite runs on a single box; cannot utilize multiple machines for a single patch | Multiple servers with unshared memory resources due to the variety of language environments; same serial execution of test cases per patch, which does not optimally utilize resources | An order of magnitude better due to concurrent execution across multiple machines; completely dynamic hardware that takes full advantage of available machines (*see below) |
| **Human interaction** | Manually spin up boxes; reduce load by turning on additional machines | Intended to include automatic EC2 spin-up, but does not yet exist; more points of failure due to multiple systems | Additional resources are automatically turned on and utilized |
| **Test itself** | Tests could be run on a development setup, but not within the production testbot | Unknown | Yes, due to a change in worker design; a testbot inside a testbot! Recursion! |
| **API** | Does the trick, but custom XML-RPC methods | Unknown | Highly flexible input configuration, similar to systems built later like travis-ci; all entity edits are done using the Services module, which follows best practices |
| **3rd party code** | Able to test security.drupal.org patches on a public instance | Unknown, but not a stated goal | Supports importing VCS credentials, which allows testing of private code bases and thus supports the business aspect (provide as a service, be self-sustaining); results and configuration are permissioned per user, allowing drupal.org results to be public on the same instance as private results |
| **Implemented plugins** | Simpletest, Coder | None exist | Simpletest, Coder, code coverage, patch conflict detection, reroll of patch, backport patch to previous branch |
| **Interface** | Well known; designed to deal with display of several hundred thousand distinct test results; lacks revision history; display uses a combination of custom code and Views | Unknown, as it is being built from scratch and has not begun; Jenkins cannot support this interface (in Jenkins terminology, multiple 100K jobs), so it will have to be written from scratch (as the proposal confirms, which was the reason for avoiding Jenkins in the past); Jenkins was designed for small instances within businesses or projects, not a large central interface like qa.drupal.org | Hierarchical results navigation from project, branch, issue, patch; capable of displaying partial results as they are concurrently streamed in from the various workers |
Speed and Resource Utilization

Arguably one of the most important advantages of the ReviewDriven system is concurrency. Interestingly, after seeing inside Google I can say this approach is far more similar to the system Google has in place than Jenkins or anything else.

Systems like Jenkins, and especially travis-ci, do not attempt to understand the workload being performed; that is the price of being generic and simple. For example, Travis simply asks for commands to execute inside a VM and presents the output log as the result. Contrast that with the Drupal testbot, which knows which tests are being run and what they are being run against. Why is this useful? Concurrency.

Instead of running all the test cases for a single patch on one machine, the test cases for a patch may be split out into separate chunks. Each chunk is processed on a different machine and the results are returned to the system. Because the system understands the results it can reassemble the chunked results in a useful way. Instead of an endlessly growing wait time as more tests are added and instead of having nine machines sitting idle while one machine runs the entire test suite all ten can be used on every patch. The wait time effectively becomes the time required to run the slowest test case. Instead of waiting 45 minutes one would only wait perhaps 1 minute. The difference becomes more exaggerated over time as more tests are added.
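That arithmetic can be sketched in a few lines; the per-group runtimes below are made-up numbers, chosen to total the 45 minutes mentioned above:

```php
<?php
// Toy model of serial vs. chunked test execution: ten test groups with
// hypothetical per-group runtimes in minutes.
$groupRuntimes = [5, 4, 4, 6, 3, 5, 4, 5, 5, 4];

// Serial: one machine runs everything, so the wait time is the sum.
$serialWait = array_sum($groupRuntimes);

// Concurrent: split across 10 machines; the wait time is the slowest chunk.
$chunks = array_chunk($groupRuntimes, 1);
$concurrentWait = max(array_map('array_sum', $chunks));

echo "serial: {$serialWait} min, concurrent: {$concurrentWait} min\n";
// serial: 45 min, concurrent: 6 min
```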

In addition to the enormous improvement in turnaround time, which lets the development workflow move much faster, you can now find new ways to use those machine resources: testing contrib projects against core commits, compatibility tests between contrib modules, retesting all patches on commit to a related project, or checking which other patches a patch will break (to name a few). Can you even imagine? A Drupal sprint where the queue builds up an order of magnitude more slowly and runs through the queue 40x faster?

Now imagine having additional resources automatically started when the need arises. No need to imagine... it works (and did so 5 years ago): dynamic spin-up of EC2 resources, which could obviously be applied to any other service that provides an API.

This single advantage and the world of possibility it makes available should be enough to justify the system, but there are plenty more items to consider which were all implemented and will not be present in the proposed initiative solution.

Five Years Later

Five years after the original proposal, Drupal is left with a testbot that has languished and received no feature development. Contrast that with Drupal having continued to lead the way in automated testing with a system that shares many of the successful facets of travis-ci (which was developed later) and is superior in other aspects.

As was evident five years ago the testbot cannot be supported in the way much of Drupal development is funded since the testbot is not a site building component placed in a production site. This fact drove the development of a business model that could support the testbot and has proven to be accurate since the current efforts continue to be plagued by under-resourcing. One could argue the situation is even more dire since Drupal got a "freebie" so to speak with me donating nearly full-time for a couple of years versus the two spare time contributors that exist now.

On top of the lack of resources, the current initiative, whose stated goal is to "modernize" the testbot, is needlessly recreating the entire system instead of just adding Docker to the existing one. None of the other components being used can be described as "modern", since most pre-date the current system. Overall, this appears to be nothing more than code churn.

Assuming the code churn is completed some time far in the future; a migration plan is created, developed, and performed; and everything goes swimmingly, Drupal will have exactly what it has now. Perhaps some of the plugins already built in the ReviewDriven system will be ported and provide a few small improvements, but nothing overarching or worth the decade it took to get there. In fact the system will needlessly require a much rarer skill set, far more interactions between disparate components, and complexity to be understood just to be maintained.

Contrast that with an existing system that can run the entire test suite against a patch across a multitude of machines, seamlessly stitch the results together, and post back the result in under a minute. Contrast that with having that system in place five years ago. Contrast that with the whole slew of improvements that could have also been completed in the four years hence by a passionate, full-time team. Contrast that with at the very least deploying that system today. Does this not bother anyone else?

Contrast that with Drupal being the envy of the open source world, having deployed a solution superior to travis-ci and years earlier.

DrupalCon Los Angeles will be the first Con where Drupal.org, home of Drupal and the Drupal community, has its very own track.

The track will feature presentations from the Drupal Association Engineering Team, where they share long and short term plans for website development, demo new and upcoming features, and gather community feedback.

A limited number of spots are available for sessions submitted by the community. That’s where you come in.

Have you ever wished you could just type one command and load up all of the things you need to work on for a project? Wouldn’t it be nice to have your terminal set up with the correct Drush alias, tailing the watchdog, with access to your servers just a couple keystrokes away? Sounds nice, right?

The purpose of the presentation was to describe how to use reusable tools and processes, tailored and in constant evolution, in order to finally defeat waterfall and guarantee delivered value in the development of websites and web applications.

This is a huge amount of material, based on both my successful and unsuccessful experiences, and I earnestly hope it will help other web centered knowledge workers. If you have questions, please ask them on twitter @victorkane with hashtag #DurableDrupalLean.
There were quite a few other fascinating and very good presentations on the subject of Process and DevOps, overlapping my own substantially, and it should be very worthwhile to share them here:

OpenLayers module is a popular solution for mapping in Drupal. The biggest benefit is the ability to use different map providers, complete Feature support and, last but not least, the simplicity of creating custom markers.

In 2014 we received over 200 DrupalCon grant and scholarship applications. Thanks to our generous sponsor contributions, we were able to get over 60 individuals to DrupalCon Austin and Amsterdam. This year, we hope to award even more!

If you need help getting to DrupalCon Los Angeles, and are an active Drupal contributor or community leader, we're here to help you make YOUR dreams of attending DrupalCon a reality. Apply for a Grant or Scholarship!

Although we value the contributions of all the GCI participants, since this was a contest there have to be winners. We are proud to announce our grand prize winners: Getulio Valentin Sanchez Ozuna (gvso: https://www.drupal.org/u/gvso) and Tasya Aditya Rukmana (tadityar: https://www.drupal.org/u/tadityar), who'll be taking an all-expenses-paid trip to Google HQ in Mountain View, California.

Google Summer of Code 2015 Announcement

GCI was fun, but now it is time for Google Summer of Code 2015 @ http://www.google-melange.com/gsoc/homepage/google/gsoc2015. GSoC is an annual program for university students organized by Google, with projects managed by open source organization mentors such as us (Drupal!). Are you or any colleagues available to be a mentor and/or provide a project idea? Please share project ideas in our wiki @ https://groups.drupal.org/node/455978, even if you're not available to be a mentor. This is perfect timing for our community and GSoC, as Drupal 8 is almost stable, providing plenty of projects to port common modules.

Did you know each accepted organization sends two mentors on an all-expenses-paid trip to the Googleplex for the "Mentor Summit"? Organization applications opened February 9th, and we're currently working on ours. We'd like to apply with at least 30 solid project ideas, so if you have an idea for any project that might be suitable for GSoC, add it to our wiki @ https://groups.drupal.org/node/455978. If you are unsure whether your project idea is a good fit for GSoC, have a look at the projects from GSoC 2014 @ http://www.google-melange.com/gsoc/org2/google/gsoc2014/drupal.

If you're a student, start by reading our getting-started guide for GSoC @ https://www.drupal.org/node/2415225. Below is some useful information that may help you get selected for GSoC this year.

Contact Drupal's org admins (Slurpee, slashrsm, cs_shadow) if you have any questions

Hang out in #drupal-google, where mentors answer student questions

Drupal's GSoC Office Hours (help in real time!)

Mentors are available on IRC in #drupal-google on Freenode for one hour, three times each weekday, from March 16th until March 27th. Join us at the scheduled times below to chat with mentors in real time, ask questions, request application reviews, or simply hang out.

Asia/Australia 04:00 - 05:00 UTC (IST 09:30-10:30)

Europe 13:00 - 14:00 UTC (CET 14:00-15:00)

Americas 18:00 - 19:00 UTC (PDT 11:00-12:00)

Contributing to Drupal

Did you know many successful students started with zero Drupal experience prior to GSoC? If you're new to Drupal and willing to contribute, come participate in core contribution mentoring; it helps anyone without experience get started with Drupal contribution development. Google wants to see students contributing to organizations before their GSoC projects start, and this is a chance to demonstrate your skills. Office hours give students who are stuck on a patch, or can't find an issue to work on, a chance to seek guidance. Create an account at http://drupalmentoring.org before you participate in core mentoring. Drupal core contribution office hours are Tuesdays, 02:00 - 04:00 UTC AND Wednesdays, 16:00 - 18:00 UTC. If you need help outside of office hours, join #drupal-contribute to chat with community members willing to assist 24/7.

We are pleased to announce that we are now accepting applications from students to participate in Google Summer of Code 2015. Please check out the FAQs [1], timeline [2], and student manual [3] if you are unfamiliar with the process. You can also read the Melange manual if you need help with Melange [4]. The deadline to apply is 27 March at 19:00 UTC [5]. Late proposals will not be accepted for any reason.

In a previous post, Dave talked about marginal gains and how, in aggregate, they can really add up. We recently made some infrastructure improvements that I first thought would be marginal, but quickly proved to be rather significant. We started leveraging Ansible for server creation/configuration and Jenkins to automate our code deployments.

We spend a lot of time spinning up servers, configuring them and repeatedly deploying code to them. As a Drupal-focused shop, this process can get repetitive very quickly. The story usually goes something like this:

This story is repeated over and over. New client, new server, new deployments. How does that old programmer’s adage go? “Don’t Repeat Yourself”? Well, we finally got around to doing something about all of this server configuration and deployment repetition. We configured a Jenkins server to handle our deployments automatically, and created Ansible roles and playbooks to easily spin up and configure new servers (specifically tuned for Drupal) at will. So now our story looks something like this:

What is Ansible?
“Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.”

Sounds like voodoo magic, doesn’t it? Well, I’m here to tell you it isn’t, that it works, and that you don’t have to be a certified sysadmin to use it (though you may need one to set it all up for you). The basic premise is that you create “playbooks” to control your remote servers. These can be as complex as a set of steps to build a LAMP server from scratch (see below), or as simple as a specific configuration that you wish to enforce. Typically, playbooks are made up of “roles”. Roles are “reusable abstractions”, as their docs page explains. You might have roles for installing Apache, adding Git, or adding a group of users’ public keys. String your roles together in a YAML file and that’s a playbook. Have a look at the official Ansible examples GitHub repo for some real-life examples.

Automate Server Creation/Configuration with Ansible

We realized we were basically building the same Drupal-tuned servers over and over. While the various steps of this process are well documented, doing the actual work takes loads of time, is prone to error, and really isn’t all that fun. Ansible to the rescue! We set out to build a playbook that would build a LAMP stack from scratch, with all the tools we use consistently across all of our projects. Here’s an example playbook:
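(A minimal sketch of what such a playbook might look like; the role names below are illustrative, not the ones we actually use:)

```yaml
# lamp.yml -- build a Drupal-tuned LAMP server from scratch
# (hosts group and role names are hypothetical examples)
---
- hosts: webservers
  become: yes
  roles:
    - common        # base packages, timezone, firewall
    - apache        # Apache with mod_rewrite for Drupal's clean URLs
    - mysql         # MySQL server plus a database for the site
    - php           # PHP with the extensions Drupal needs (gd, pdo_mysql, ...)
    - drush         # Drush for command-line site management
    - deploy-keys   # developers' public SSH keys
```

Run it with `ansible-playbook -i inventory lamp.yml` and every host in the `webservers` group converges to the same configuration.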

Benefits:

Consistent server environments: Adding additional servers to your stack is a piece of cake and you can be sure each new box will have the same exact configuration.

Quickly roll out updates: Update your playbook and rerun against the affected servers and each will get the update. Painless.

Add-on components: Easily tack on custom server components like Apache Solr by adding a single line to a server’s playbook.

Allow your ops team to focus on real problems: Developers can quickly create servers without needing to bug your ops team about how to compile PHP or install Drush, freeing the team to focus on higher-priority tasks.

What is Jenkins?

“Jenkins is an award-winning application that monitors executions of repeated jobs, such as building a software project or jobs run by cron.”

Think of Jenkins as a very well-trained, super-organized, exceptionally good record-keeping ops robot. Teach Jenkins a job once and it will repeat it over and over to your heart’s content, keeping records of everything and letting you know should things ever go awry.

Deploy Code Automatically with Jenkins

Here’s the rundown of how we’re currently using Jenkins to automatically deploy code to our servers:
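As a rough sketch, the “Execute shell” build step of such a Jenkins job might look like the fragment below; the path and branch are illustrative, and the exact commands depend on your project:

```shell
#!/bin/bash
# Hypothetical Jenkins "Execute shell" build step for a Drupal deployment.
set -e                               # abort the build if any step fails

DOCROOT=/var/www/example/docroot     # illustrative docroot path

cd "$DOCROOT"
git pull origin master               # fetch the latest code
drush updatedb -y                    # run any pending database updates
drush features-revert-all -y         # revert features so code wins over the DB
drush cache-clear all                # clear caches so the changes take effect
```

Because `set -e` stops the job at the first failing command, a broken deploy shows up immediately as a failed build, with the offending step visible in the job’s Console Output.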

The biggest benefit here is saving time. No more digging for SSH credentials. No more trying to remember where the docroot is on this machine. No more of the “I can’t access that server, Bob usually handles…” nonsense. Jenkins has access to the server, Jenkins knows where the docroot is, and Jenkins runs the exact same deployment code every single time. The other huge win, at least for me personally, is that it takes the worry out of deployments. Setting it up right the first time means a project lifetime of known workflows and deployments. No more worrying about whether pushing the button breaks all the things.

What else is great about using Jenkins to deploy your code? Here are some quick hits:

Historical build data: Jenkins stores a record of every deployment. Should a deploy fail, you can see exactly when things broke down and why. Jenkins records everything that happened in a Console Output tab.

Empower non-server admins: Users can log in to Jenkins and kick off manual deployments or jobs at the push of a button. They don’t need to know how to log in via SSH or even how to run a single command from the command line.

Enforce consistent workflow: By using Jenkins to deploy your code, you also end up enforcing a consistent workflow. In our example, drush reverts features on every single deployment. This means devs can’t be lazy and just change things in production; those changes would be lost on the next deploy!

Status Indicators across projects: The Jenkins dashboard shows a quick overview of all of your jobs. There’s status of the last build, an aggregated “weather report” of the last few builds, last build duration, etc. Super useful.

Slack Integration: You can easily configure jobs to report statuses back to Slack. We have ours set to report to each project channel when a build begins and when it succeeds or fails. Great visibility for everyone on the project.

Both of these tools have done wonders for our workflow. While there was certainly some up-front investment to get them built out, the gains on the back end have been tremendous. We’ve gained control of our environments and their creation. We’ve taken the worry and the repetition out of our deployments. We’ve freed up our developers to focus on the work at hand. Our clients are getting their code sooner. Our team members are interrupted less often. Win after win after win. If your team is facing similar challenges, consider implementing one or both of these tools. You’re sure to see similar results.

Drupal is one of the largest and most successful open source projects, and much of our success is due to the vibrant and thriving community of contributors who make the platform what it is – the individuals who help put on Drupal Conferences and events, the documentation writers, the designers and usability experts, the developers who help write the software, and countless others.

Participating in open source communities is a rewarding experience that will help you advance and develop, both personally and professionally. Through participation, you gain an opportunity to learn from your peers. You are constantly challenged and exposed to new and interesting ideas, perspectives, and opinions. You are not only learning the current best practices, you are also helping develop innovative new solutions, which will improve the tools in your arsenal and take your career to the next level – not to mention contributing to your personal growth. (One of the five Drupal core committers for Drupal 8, Angie Byron, got her start only a few years ago – as a student in the Google Summer of Code – and has rapidly advanced her skills and career through open source participation.)

Participation gives you significantly better insight and awareness. By attending Drupal events and engaging online, you place yourself in a better position to understand and leverage the solutions that are already available, know where and how to find those solutions, and have a clearer sense of how you can leverage them to achieve your goals. With this knowledge and experience you become capable of executing faster and more efficiently than your peers who don’t engage.

We kept things simple for this episode of the DDoD. The Options Element module uses JavaScript to provide an easy way to create radio button and checkbox options for fields on a Drupal content type. Before this module, you had to add a key|value pair for each option you wanted. With this module, the key and value are broken into two separate fields, making it easier to distinguish between them.

In 2014 I got in contact with many other Drupal shops. We had lots of great discussions about the future of Drupal, the future of ERPAL, and the industries other than publishing that could definitely take advantage of Drupal. With all the new ideas and results from these personal contacts, I want to take a little time now to make the 2015 ERPAL roadmap more transparent to you. All our activities in 2015 will align with our vision of making Drupal – via the ERPAL distributions – the most flexible web-based framework available for business applications.
In some of my previous blog posts and the Drupal application module stack poster, I’ve shown why I think Drupal has all the components needed for flexible business applications.

As we’re almost done with the development work to release a first beta version of ERPAL Platform, the next steps need to be planned out. In 2015 we’ll focus on the following six roadmap activities:

What, exactly, do these roadmap steps mean? Here are the details on each one:

Teach other developers how to develop business applications with Drupal
Modeling business processes and implementing them in software isn’t an easy job. Over the last three years, we’ve discovered many best practices for analyzing processes and using Drupal for business applications; we want to share these with the Drupal community, so we’ll release more screencasts and blog posts covering the most important ones. Sticking to best practices, like using a combination of the Rules, Entity, Field, Feeds, Views, and Commerce modules – all modules that can be extended easily with custom plugins – will keep Drupal applications flexible, extendible and maintainable.

Port ERPAL Platform to Drupal 8
Our goal is to have a first alpha release of ERPAL Platform ready six months after Drupal 8 is released. Since there’s currently no reliable roadmap for the first Drupal 8 release, we can’t announce a fixed deadline. We’ve already started porting ERPAL Core to bring flexible resource planning to Drupal 8, but we do depend on the Drupal Commerce roadmap for Drupal 8, which contains many improvements to the overall architecture of Drupal Commerce. As soon as there’s a stable beta release of Drupal Commerce, we’ll continue our port of ERPAL Platform based on Drupal Commerce 2.x.

Start our development partner network
In 2015 we’ll start our development partner program, building a network of qualified Drupal developers and shops who focus on the quality and flexibility of Drupal applications. Our development partners will benefit from our support in their projects as well as from new business opportunities stemming from our corporate marketing promoting them. For Drupal, this means more people striving to bring Drupal into other industries and increase its application range. This strategic goal is tightly related to the first roadmap activity: teaching other developers to build business applications with Drupal.

Two things emerged from our discussions with other Drupal shops:

1) that almost everyone agreed that Drupal is a better application framework than a CMS

2) that it’s perfectly suited for business applications because it’s open, flexible and can be integrated with other enterprise legacy software

What’s missing, however, are public project references with case studies showing potential clients the power of Drupal – not only for content sites but also for business applications in different industries and their integrations. With this promotion, we want to help our partners grow their business in this market while simultaneously increasing Drupal’s uptake in other vertical markets.

Release the Drupal update automation service, “Drop Guard”
The technology to automate Drupal updates, and security updates in particular, has already been in use for more than 2.5 years at Bright Solutions. Drupalgeddon made us realize that Drupal security updates are business-critical: they need to be applied within minutes of their release! This year we want to launch Drop Guard as a service for Drupal developers, to help shops and agencies keep their clients’ sites secured – automatically. The service will integrate with their CI deployment processes and help Drupal avoid the negative press of hacked sites. If you want to know how it actually works in our internal infrastructure and how it’s integrated with ERPAL, read my previous blog post.

Provide cloud app integration for ERPAL Platform
With ERPAL for Service Providers, we created a Drupal distribution that gives service providers a centralized, web-based platform for managing all their business processes in one tool. The ERPAL Platform distribution provides Drupal users and site builders with a pre-configured base for building flexible business applications on Drupal Commerce and other flexible contrib modules. Since ERPAL Platform implements the full sales process – starting with first contact and sales activity; quotes, orders and invoicing; all the way through to reports – plus a slim project-controlling feature, we want to let users extend this solution easily with the best vertical cloud tools out there. ERPAL Platform can integrate with cloud apps such as Jira, Trello, Mite, Redmine, Basecamp, Toggle and many others. The benefit is that users can keep ERPAL Platform as their central business process and controlling application while their project collaboration is supported by specialized platforms. The clear advantage is that agencies save lots of time in project controlling and administration, as many processes can be automated across all the integrated applications. Using Drupal as their centralized platform, they remain flexible and agile in their business development.

What about the roadmap for ERPAL for Service Providers?
ERPAL for Service Providers is currently very stable and is already used by more than 30 of our customers at Bright Solutions. We will continue to maintain this distribution, fix bugs, and support all users. During the Drupal 8 lifecycle, we’ll port ERPAL for Service Providers to be based on ERPAL Platform; so, in the future, ERPAL Platform will be the base distribution for building a vertical use case for service providers.

JOIN THE AS IF COLLECTIVE on PATREON

The As If Collective is a network of roleplayers, gamers, GMs, game designers, artists and neophiles interested in exploring and contributing to experimental applications of narrative engineering. Every month I do 2-4 releases: each is a minigame, a subsystem, an adventure, a table/chart, a form/sheet, or a web-based tool of interest to roleplayers, storygamers and interactive fictioneers. As a member of the Collective, you get early access to all of these works in both draft and final form, with the added knowledge that you helped make them happen.