Search is an important facet of any large website these days. We’d talked previously about why you want to take full control of your site search. Bombarding your users with a mess of links won’t do anyone any favors. One of our favorite solutions for this problem is Apache Solr and recently we had the opportunity to set it up on Drupal 8. Let’s take a moment to go through a bit of what that solution looked like and some thoughts along the way.

Setting the stage

Before we dive too far into the how, we really ought to give a bit of time to the why. More specifically we need to cover why we didn’t simply use one of the existing modules contributed to Drupal by the community. At the time, there was one prominent module group for implementing Solr search in Drupal 8. The Search API module in tandem with the Search API Solr Search module were really the most direct way to implement this advanced search on your site. These are great modules and for a different situation would have worked just fine. Unfortunately, the requirements we were working with for the project were more specific than these modules were equipped to handle.

There were three key things we needed control over, and we aren’t keen on hacking a module to get something like that done. First, we needed specific control over what was indexed into Solr. The Search API module lets you specify generically how fields are translated to the Solr index, but if you need different handling you would either need multiple indexes or have to sacrifice some of that customization. The site also needed to make use of a fairly complicated feature of Solr: the more like this query. (Warning, incoming search jargon!) This query lets you search the index for content relevant to another indexed piece of content. Relevancy is determined by the fields you specify in the query, and results can be limited to content that meets a certain relevancy score threshold.

The last thing we had to have was the ability to manage how often content was indexed. The existing modules allowed indexing to happen on a periodic cron run, but couldn’t update the index as soon as changes were made to content. This project was going to have a lot of content updated each day, which meant we couldn’t afford to wait for things to be indexed and updated. With these three hurdles in the way of getting Solr implemented, it seemed like we were going to have to go another way, but after looking at some documentation we determined that creating our own implementation would not be so difficult.

Solr search with Solarium

Before we get too far ahead of ourselves, we should note that this wasn’t done with a contributable module in mind. That isn’t because we don’t like giving back to the community (we totally do); it’s because this was created for a very specific client need. There will likely be a more generic version of this coming out down the road if demand is high enough. Also, we are under the impression that most use cases are covered by the modules mentioned above, so that is where most people should start. Enough with the disclaimers; let’s talk Solarium.

We went with Solarium as the Solr client to use for this. That is what most of the existing Drupal modules use and it seemed to be the most direct way to do this with PHP. Installing Solarium is pretty simple with Composer and Drupal 8. (If you aren’t using Composer yet, you really should be.) Using a client for communicating with a Solr instance isn’t specifically required. Ultimately, the requests are just simple HTTP calls, but the client saves you from having to memorize all of the admittedly confusing query language that comes with using Solr.

Installing Solarium can be done as simply as composer require solarium/solarium. You could also add a line to the require section of your composer.json file: "solarium/solarium": "3.6.0". Your approach on this part may vary, but it should be done from the root of your Drupal site so that this library goes into the global dependencies for the project. These instructions are detailed a bit more in the official Solarium docs, which also have a bunch of example code that will help if you dive into this like we did.

For this implementation, we opted to create a Solr PHP class to do the heavy lifting and made use of a Drupal service for calls to it from the rest of the app.

The heart of the class is going to be the connection to Solr which is done through the Solarium client. We will make use of this client in our constructor by setting it up with the credentials and default settings for connection to our Solr instance. In our case, we used a config form to get the connection details and are passing those to the client. We wanted to use the configuration management system so that we could keep those settings consistent between environments. This allowed more accurate testing and fewer settings for developers to keep track of.

We are doing this in the constructor so that we don’t have to create a new client connection multiple times during a given call. In our case, we ended up using this as a Drupal service, which allows the Client object to be created only once per call and gives us a simple way to use this class throughout the app.
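As a rough sketch of that constructor pattern (the module name, config object, and setting keys here are hypothetical stand-ins for the real config form values, not the client's actual code):

```php
<?php

use Solarium\Client;

/**
 * Sketch of a Solr service class. 'mymodule.settings' and the
 * solr_* keys are assumed names for illustration only.
 */
class SolrSearchService {

  /** @var \Solarium\Client */
  protected $client;

  public function __construct() {
    // Connection details live in config so they stay consistent
    // between environments via configuration management.
    $config = \Drupal::config('mymodule.settings');
    $this->client = new Client(array(
      'endpoint' => array(
        'default' => array(
          'host' => $config->get('solr_host'),
          'port' => $config->get('solr_port'),
          'path' => $config->get('solr_path'),
          'core' => $config->get('solr_core'),
        ),
      ),
    ));
  }

}
```

Registering the class as a service is then a couple of lines in the module's services.yml file, after which it can be pulled from the container anywhere in the app.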

The next part is the actual search method. It does a lot, and may not be immediately clear. In this method, we take the parameters passed in and build a Solr query. We have a helper function that does some specific formatting of the search terms to put them in the right query syntax. For most sites, this code would serve fine for generic searching of the whole index, or you could have multiple versions for searching with specific filters.
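A hedged sketch of what such a search method can look like with Solarium (the formatTerms() helper and the filter handling are illustrative assumptions):

```php
public function search($terms, array $filters = array(), $rows = 10) {
  $query = $this->client->createSelect();
  // Hypothetical helper that escapes and formats the raw terms
  // into Solr query syntax.
  $query->setQuery($this->formatTerms($terms));
  $query->setRows($rows);
  // Optional filter queries, e.g. array('content_type' => 'article').
  foreach ($filters as $field => $value) {
    $query->createFilterQuery($field)->setQuery($field . ':' . $value);
  }
  return $this->client->select($query);
}
```

The result set that select() returns can be iterated directly, with each document exposing the indexed fields as properties.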

The code we’ve presented so far isn’t breaking new ground and for the most part does a similar job to the existing search modules available from the Drupal community. What really made us do something custom was the more like this feature of Solr. At the time that we were implementing this, we found that piece to be not quite working in one module and impossible to figure out in another, so we put our own together.

Thankfully, with Solarium this was a pretty simple query to tackle, and we were able to have related content on the site without much other setup. We can create a new more like this query and submit an ID so Solr knows which content to compare against for similarity. The rest of it behaves very similarly to the search method presented previously. The results are returned the same way, and we are able to do some other filtering to change the minimum relevancy score or number of rows.
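A sketch of that more like this query, again hedged (the field names, thresholds, and ID scheme are placeholders):

```php
public function moreLikeThis($id, array $fields, $rows = 5) {
  $query = $this->client->createMoreLikeThis();
  // Tell Solr which indexed document to find similar content for.
  $query->setQuery('id:' . $id);
  // The fields similarity is computed against, e.g. title, body.
  $query->setMltFields(implode(',', $fields));
  $query->setRows($rows);
  // Loosen the frequency minimums so small indexes still match.
  $query->setMinimumDocumentFrequency(1);
  $query->setMinimumTermFrequency(1);
  return $this->client->moreLikeThis($query);
}
```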

We didn’t share all of the code used for this here, obviously. The point of this post isn’t to help others create an exact duplicate of this custom implementation of Solarium in Drupal 8. At the time of this writing, it seems that the existing Solr modules might be in great shape for most use cases. We wanted to point out that if you have to dip into code for something like this, it can certainly be done and without an insane amount of custom code.

This is the first part in a series on how not to ruin your life on your next Drupal project. Sound extreme? Well, if you’ve ever suffered the crushing defeat of working your tail off on a lengthy project only to sit there at the end after launch feeling like you just came out of the opening night of Star Wars: The Phantom Menace (ie: severely disappointed and a bit confused), then you know that it is indeed extreme. We spend a majority of our day at work and when it’s not rewarding or energy-giving, it’s a real drag.

So what is the formula? Well, a blog post isn’t going to solve all your problems -but- there are certainly key approaches that we have taken that have helped us avoid catastrophe time and time again. Translation? We’ve managed an extremely high customer satisfaction rate for over two decades. What’s been happening here seems to be working, so we pay a lot of attention to what it is exactly that we are doing and assess why we think it’s working. If you want a high-level bird's-eye view, check out our process page. We are going to get a bit more down and dirty here though.

Ultimately, we want you to go home to your family at the end of the day saying “GUESS WHAT I DID AT WORK TODAY EVERYONE!!” (like we do) instead of “Can we just order pizza and go to bed at 7?”.

We’ve identified 3 essential components to kicking a project off right, the first of which will be covered in this post. They are the following:

So let’s start with Aggressive and Invested Requirements Gathering. We spent a lot of time thinking about this and I realized it comes down to the adjectives. Everyone knows (mostly) about requirements gathering, but it’s a minefield of unasked questions, unanswered questions, misconceptions, forgetfulness, and chaos. The solution? Take ownership of this baby from the beginning and treat it like it’s your project - it’s your passion - and do what it takes to nail it down. Getting answers that make your life easier, despite your suspicions that the client is maybe not thinking it through, doesn’t help anyone. Take no shortcuts and care about everything.

“Take ownership of this baby from the beginning.”

Here are 3 specific goals:

Assess priorities (theirs and yours!)

Priorities are key because we can easily get hung up on things that ultimately aren’t that important. On the flip side, there are things that are tremendously important to one of the two parties, and hence, they must be important to both. So the client says they care most about X, then Y, then Z. In your head you’re thinking, “Yikes, Z has a huge unknown element that I’d like to solve quickly to understand the implications.” So talk about it. Repeat their priorities back to them, state your own, and find that happy middle ground where you can pursue the project in an efficient and effective way while also focusing on what matters. It sounds simple, but unspoken expectations and concerns are a plague in project management.

Determining constraints (time, money, features, personnel)

I still love the age-old project management triangle, which says that for any given project you can choose one of the three key priorities: time, money, or features. This means that you can’t simply dictate the budget and the schedule and also expect a very rigid set of requirements. The problem is that even after stating this, there is a lot of pressure from the client to set expectations on all three, and that simply isn’t possible. So it’s critical early on to sort out what the real constraints are. Ok, you would like this to stay under $50k. Is that a hard cap, or could you go over if you felt it was worth it? So you want this launched by January 1st. Is that more of a clean-sounding date, or is it tied to a fiscal year or some other real deadline? Ok, so you want features X, Y, and Z. Which of those would be deal breakers to not have? This kind of questioning is very helpful because early in the build phase you can make intelligent decisions about how and when to collaborate with the client, since you know the significance of obstacles or changes of direction that impact these things.

The last thing I’m throwing on top of this triangle is the concept of personnel. We’ve found that knowing who your stakeholders are, who your end users are, who your editors and admins are - early on - is critical. I’ve literally had meetings where we’re deep into requirements and then I meet the person who has veto power over everything and the thing goes sideways. We’ve learned as well that there is a repeating sales cycle when new stakeholders arrive, because convincing the last three people doesn’t mean you’ve convinced the next three. I’ve also had times where a stakeholder makes some critical decisions, but then after talking to the people “on the ground”, I find that he was simply wrong about some of the day-to-day operations. It’s good to talk to everyone, but also find out each person’s role in the big picture. Oftentimes we’ve found ourselves advocating on behalf of lower-level employees who bring up important and practical issues that decision-makers often overlook. It’s a delicate balance, but if the system isn’t welcomed and adopted by its primary users, the project will sink even if the ones writing the checks are getting what they think they want.

Reading between the lines

This is tied to the item above in a lot of ways, but stands on its own as an important point. When you’ve done this long enough, you learn that what a potential client asks for is not always really the point. Often there is a hidden goal or motivation that led to the formation of a feature request. Even if that request perfectly solves the need, it’s still important to discover that need, because it can affect the implementation and guide the specifics. For example, say a request is made to let users download an export of tracking data, but you dig and find out that they’re just using this tool to turn around and upload the data into a remote system, and it’s a bit of a pain. Maybe it’s better to build a web service where their system can talk directly to ours and users can step out of the daily grind.

Conclusion

So in summary - gather requirements the same way you’d date someone you’re thinking of marrying. Care about it and pursue it as if it’s the most important thing you’ve got going, with an end goal of a lifetime of happiness.

Up Next: Running a Drupal project the right way: Part 2 - Relentless Ideation


For a current Drupal 7 project that uses Ubercart and Ubercart Recurring to provide for a subscription service, I need the ability for an admin user to be able to cancel a user's ongoing recurring fee when a subscription level is changed. I accomplished this with the following php rule:

<?php
// Load all recurring fees for a user.
$recurring_fees = uc_recurring_get_user_fees($user_uid);
// Loop through the fees.
foreach ($recurring_fees as $fee) {
  // Cancel each fee.
  uc_recurring_fee_cancel($fee->rfid);
}
?>

On a current site in development I am using Ubercart to provide a renewable subscription service. To make the user experience clean, I wanted to protect the user from going 'shopping' to add their subscription. To do this I decided to use a rule to add the product to the user cart when the user is created by an administrator or when the subscription is cancelled or fails payment. I tried the Ubercart Rules module, but this is mainly for dealing with orders and not carts, and did not contain the needed add to cart rule.

Luckily, it is easy to make one yourself using the PHP action in Rules. The following is the code needed to add a given nid to a specified user's cart:
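Since the original snippet did not survive in this post, here is a hedged reconstruction using Ubercart's cart API. The $product_nid and $uid variables are assumptions that would come from the rule's context; for logged-in users Ubercart keys carts on the user's uid, so the uid can be passed as the cart ID:

```php
<?php
// Sketch: add one unit of the product node to the given user's cart.
// uc_cart_add_item() is Ubercart's standard add-to-cart function;
// passing $uid as the cart ID targets that user's cart, and the
// FALSE arguments suppress the status message and the add-to-cart
// redirect, which don't make sense inside a rule.
uc_cart_add_item($product_nid, 1, NULL, $uid, FALSE, FALSE);
?>
```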

I was playing with Navin, an Omega sub-theme, and wanted to make some minor adjustments to the CSS, and maybe have a custom template.php file to work with. The best way to do this is to create a sub-theme of the sub-theme (though I kind of wish there were a way to just add a plain old CSS file somewhere - and I feel like maybe there's a way to do that I'm spacing on).

To get this to work required a few steps. My use case is that I'm working within the Open Enterprise Drupal distribution to get a fairly simple Drupal-based blog set up. It comes with Navin as the default theme.

I'm not sure how it happened, but today I noticed that Drupal's menus were behaving very oddly. After upgrading to Drupal 6 and installing several additional modules, I noticed duplicate menu entries as well as other disturbing oddities. Items I was placing into the menu were not showing up. Menu items that I moved around were apparently saved but they did not appear properly in a dropdown context.

Looking further into it via the absolutely awesome SQLYog tool, I verified that there were dozens of duplicate entries. Some items were duplicated up to six times. It was a real mess.

The database tables involved here are menu_links and menu_router. These are two of the more mysterious tables in Drupal. I haven't had the need to spend much time with them in the past, and I know now that this is a good thing. Fortunately, you do not have to know anything about them to fix this problem. While I spent a couple hours carefully deleting items from this table, ultimately I gave up. I was able to remove all the duplicates, but the menus were still misbehaving. At this point, I just wanted to do a factory reset on the menus, but it's not so simple as flushing cache. However, that is not far from the solution.

This solution will do a 'factory reset' on your menus. You will lose any customizations you have made. However, all core and contrib module entries will be restored very nicely.
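A hedged sketch of what that reset amounts to in Drupal 6 (destructive - back up your database first; the table and function names are core's, but treat this as an outline rather than a tested script):

```php
<?php
// Wipe the menu tables entirely. All custom menu items are lost.
db_query("DELETE FROM {menu_links}");
db_query("DELETE FROM {menu_router}");
// Rebuild: core and contrib modules re-register their router items
// and module-provided menu links from scratch.
menu_rebuild();
?>
```

This can be run from a small update hook or the Devel module's "Execute PHP" page.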

Enjoy the imported content at http://localhost/~username/openoutreach_dest.

Final words

I know that a push deployment plan can be used to exchange content between two actual sites, but remember that my aim is to import the content from code when rebuilding a site from scratch; that's why I exported the content to a “feature module”. In my use case, the destination site in this article can easily be seen as a future incarnation of the source site itself.

Someone may also wonder whether it is right at all to export content as code; well, in my case I really see this “default content” as configuration, so having it stored as PHP code in a feature module makes total sense to me.

Pantheon ♥s Drush. We took care when assembling our DROPs infrastructure to maintain Drush access for developers, and we'll be building more command-line power tools over time. The magic that allows us to offer Drush (as well as rsync and sftp) without traditional shell access will be the subject of a longer "Inside Pantheon" post coming up, but the end-result is there for you now.

When you log in, your account overview screen will let you snag a compiled drushrc.php file:

On Linux/macOS you can drop this file into a .drush directory in your home directory. You can also put it in the aliases directory of your local Drush installation. Then run drush sa and you should get a list of your site aliases.
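Roughly, those steps look like this (the download path is typical, not prescriptive):

```
# Put the downloaded drushrc.php where Drush looks for aliases.
mkdir -p ~/.drush
cp ~/Downloads/drushrc.php ~/.drush/

# List the site aliases Drush now knows about.
drush sa
```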

Some of the biggest questions that come up for developers with a next-generation platform like Pantheon are "how do I get my database synced?" or "what about migrating large amounts of files?"

You've asked and we've listened. In the past week we've deployed a couple updates to make moving your data around much easier, including new command-line capabilities for advanced users.

First, as the initial release of a series of updates we're making to the control panel we've added the ability to directly import database dumps and file archives using the web interface. This gives people the ability to essentially re-run the import process any time they like for just the pieces they need. This feature is accessed in the same place as database/file downloads (see image right).

We've also added an option to "Wipe content" for an environment. This will clear out the database and files area completely, and can be extremely helpful for developers working on installation profiles. After running this option you'll be back to a fresh install.php when you load the site.

Power Tools for Power Users
For many developers, a terminal is better than any web UI. As part of the work to improve our data-hauling workflow in the control panel, we've also been improving our command-line features. We pre-generate Drush alias files for all your sites — grab this from your account screen, add your SSH key and you're cleared to Drush.

You can now use Drush to import/export your database and files directly. For instance, getting your SQL credentials is as easy as:

drush @pantheon.my-site-name.dev sql-connect

That will give you access to the database instance for that site/environment. You can use this to pull or push database dump files directly, or connect a local client, including GUI clients.
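The rsync command itself appears to have been lost from this post; a representative invocation (the user and host here are hypothetical placeholders, not Pantheon's actual connection string) would look something like:

```
rsync -rvz ssh-user@appserver.example.com:files/ ~/Desktop/site-files/
```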

That will sync the remote files directory to your desktop. Reversing the arguments will do the reverse. When using rsync it's important to be careful to use that trailing slash or else you'll end up copying one directory into another, rather than syncing their contents.

There's more technical documentation of these capabilities in the wiki, including how to use rsync directly if you prefer. If you are truly savvy, you can even make an SFTP connection to transfer files!

We'll be expanding and improving these features in the coming weeks, as well as providing more customized copy/paste snippits directly from the dashboard to get you Drushing in no time. Let us know what you think and what else you'd like to see.

Also, we do stuff like this all day and it's rad. If you're into this kind of work, we're hiring.

When a client asks for a way to pull content onto a site through RSS, the obvious choice is to use Drupal's Feeds module. I've never been really in love with this module but it does the job well. We recently had an interesting case that required extending the normal functionality of feeds to interact with custom content types.

The scenario was as follows:

The site lists many organizations and the events of those organizations.

Site users connect with and follow events and news from the various organizations.

Each organization may run several websites each having its own set of RSS feeds.

The organizations desired the ability to have their content added to their pages dynamically and without effort.

The user would visit the organization's page and see a list of news and articles from one or more of that organization's feeds.

The idea is fairly straightforward, but Feeds does not support this kind of association by default.

If you're not familiar with feeds here's a brief rundown of how it works:

Feeds allows you to designate a content type to be the source of a feed, or it will create a feed content type for you.

You then create new nodes of this content type, adding the URL of the feed to be imported to each node.

At designated times, the feed importer fetches information from the designated RSS URL. The data is added to Drupal as nodes, and you can choose what kind of node you would like the imported data to become.

This was a Drupal 7 site and our idea was to use references to reference each feed importer to the organization in question. For example the feed importers for developer.apple.com would reference the organization node for Apple as would the feed importer for news.apple.com while the feed importers for developer.microsoft.com and news.microsoft.com would reference the Microsoft organization node.

Make sense so far?

We then created a new content type for partner's news called Partner News. To this we added normal title, body, date, and ID information and also another reference field for organization. What we really wanted was for the partner news nodes to automatically inherit the reference field from the importer that created them. So playing off the example above, we wanted each news item that was added to Drupal from the feed at news.apple.com to inherit the reference to the Apple organization node so that we could later create a view on the Apple organization node that displayed all the imported feeds associated with Apple.

The trick is that this function didn't exist. Lucky for us Feeds module provides some useful hooks to extend its base functionality.

The custom module we created is fewer than 50 lines if you ignore all the comments.

The following screen shots show a typical Feeds importer setup.

In this case I am attaching my feed importer to content type called importer. This means that when I create a new importer node, I will see that feeds has added a new field to the content type giving me a place to add the URL of the feed to be imported.

Under settings I designate that imported nodes should be Partner News nodes. That means each item in an imported RSS feed will become its own Partner News node. I also set Feeds to update nodes rather than replacing them or creating new ones when it finds duplicate data.

Finally we designate the mapping. The mapping defines to feeds what elements from an imported RSS element should be added to what part of the new node. Some of these are pretty obvious. We map title to title, date to date, description to body, and GUID to GUID. This last one (GUID) provides a unique identifier for updating feed data and is required if you want the nodes to update rather than duplicate.

But what we want doesn't exist. We want to see a source element that says something like “Feed Importer's Organization Reference” and a target that says something like “Organization Reference”, so that we can map from one to the other.

To do this start a custom module in the standard fashion (http://drupal.org/node/1074360). I'll call my module feedmapper. In feedmapper.module add the following function:
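The function itself did not survive in this post. In Feeds for Drupal 7, a new mapping source is typically exposed via hook_feeds_parser_sources_alter(), so a sketch (the field and callback names follow the article; the rest is an assumption) looks like:

```php
/**
 * Implements hook_feeds_parser_sources_alter().
 *
 * Sketch: exposes the importer node's organization reference as a
 * mapping source on the importer configuration screen.
 */
function feedmapper_feeds_parser_sources_alter(&$sources, $content_type) {
  $sources['field_importer_reference'] = array(
    'name' => t("Feed Importer's Organization Reference"),
    'description' => t('The organization referenced by the feed importer node.'),
    'callback' => 'feedmapper_get_importer_reference',
  );
}
```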

This adds a new source to the dropdown on the feed importer configuration.

The callback describes a function that will actually handle the data processing. You should prefix it with the name of your module, but it can be named anything that makes sense. I haven't written this function yet, but we'll get to that shortly.

Note that the source and the target both reference the same field field_importer_reference. This is due to the fact that I am reusing the field across content types. If you had different field names for each content type, you would need to make the target and source point to the specific field names you created.

Now you can assign this mapping. Of course it doesn't do anything yet because we haven't written the appropriate callbacks.

The set method is actually pretty easy because Feeds handles that for us, provided we pass the correct data in the first place. We need to focus on retrieving the correct node ID from the feed importer. To do this we access a property of the feed object, feed_nid, which, as you can guess, returns the feed's node ID. Once we have that nid, retrieving another field's data is fairly trivial; we just need to make sure we're using the correct type of node, so we run a check on the node type and then get the field in question:
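A hedged sketch of that source callback (the function name matches the one registered above; the 'importer' type check and the nodereference field structure follow the setup described in this post):

```php
<?php
/**
 * Source callback: look up the organization reference on the importer node.
 *
 * $source is the Feeds source object, whose feed_nid property holds the
 * node ID of the importer node the feed is attached to.
 */
function feedmapper_get_importer_reference($source, $key) {
  $node = node_load($source->feed_nid);
  // Make sure we are looking at one of our Importer nodes before
  // pulling the field value off it.
  if ($node && $node->type == 'importer' && !empty($node->field_importer_reference)) {
    return $node->field_importer_reference[0]['nid'];
  }
}
?>
```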

After being alerted to Google Fonts, the Google Font API, and the Google Fonts module in a recent Drupal Planet post (http://acquia.com/blog/robert/google-fonts-api-time-drupal-market-one-day), I dropped my lunch and said, "Rad!" Then I rolled up my sleeves and dropped a few fonts into my blog in no time flat. What follows are usage notes and examples on getting this all going for yourself:

Enable your new modules (admin/build/modules/list). Make sure you enable both Google Fonts and Google Fonts UI.

Enable your desired fonts on the Google Fonts admin page (admin/settings/Google_fonts). Here you will find a list of all the available Google Fonts, with a checkbox for each one. For my example I am enabling Droid Sans Mono and Lobster. When you have made your choices, click 'Save configuration'.

Add the font to an element via CSS in two different ways:

• use the font directly in your stylesheet (.node h2 { font-family: "Droid Sans Mono"; }). I have not tried this, because the next way is easier

• or add a rule via the 'Add rules' tab (admin/settings/Google_fonts/rules). Here you will find a textarea for each font that you previously selected. Enter your CSS selectors here to get going. In my example I am selecting the dateline and the custom-made tag marquee view at the bottom of my page.

Now check out my handiwork in the dateline on all posts, and at the bottom of the page, where my Tag Marquee uses the funky Lobster font.

I frequently use a third-party designer to help with the tedious task of going from PSD to final theme. If you haven't realized it yet, a lot of designers have trouble setting up a local MAMP install with Drupal in which to experiment with CSS. To deal with this without giving the designer any command-line access, my shop uses what we call CZI on all Drupal installs. This stands for CSS Injector, Zen theme, IMCE; it allows a designer to upload images and apply CSS rules to a development site they have been given permissions for, on a theme (Zen) that provides all the classes and IDs anyone would need.

After my shop, the designer, and the client are satisfied, CSS Injector and its external files become dead weight and need to be removed. Below I detail the process of using Zenophile (http://drupal.org/project/zenophile) to create a Zen subtheme in which to wrap up all your CSS Injector files:

Create a subtheme using Zenophile

Enable the Zenophile module

Create a new Zen subtheme (site building > themes > create zen subtheme):
• name it appropriately according to the site URL
• set the site directory to the install's folder unless you want it available to other installs
• create a fresh CSS file
• Submit (you may need to chown the target directory to have appropriate permissions)

Disable the Zenophile module

Manage blocks for the new theme (site building > blocks > list > newtheme):
• save each block individually to have titles set appropriately

3. Implement hook_default_page_manager_pages()

The name of the file is very important: it must be MODULENAME.pages_default.inc (in our case, ctools_defaults.pages_default.inc). Because we are clever, we'll use a little trick inside it to make our lives easier: keeping each page in its own separate file. This lets us version-control each page independently, and also makes re-exporting and editing existing pages easier.
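A hedged sketch of that trick, assuming Drupal 6-era APIs (the pages/ directory name comes from the description here; file_scan_directory()'s mask syntax differs between Drupal versions):

```php
<?php
/**
 * Implementation of hook_default_page_manager_pages() (sketch).
 *
 * Load every .inc file from our module's pages/ directory and let each
 * one define a $page object, keyed by its machine name.
 */
function ctools_defaults_default_page_manager_pages() {
  $pages = array();
  $path = drupal_get_path('module', 'ctools_defaults') . '/pages';
  foreach (file_scan_directory($path, '\.inc$') as $file) {
    require $file->filename;
    if (isset($page)) {
      $pages[$page->name] = $page;
      unset($page);
    }
  }
  return $pages;
}
?>
```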

So what's special? Not much: this code looks for .inc files in a directory called pages within our module, and for each file found it tells Panels that we have a default page. Each new file that we place in there will be loaded automatically.

4. Export our panel pages

The process is pretty simple, but in case someone gets lost, the button is right here when you are editing a panel page:

Which will get you to a page with all the code ready to be copied, like:

5. Create our PAGENAME.inc

Inside this file, open a PHP tag and paste the code copied on the previous screen. Just like that; don't be shy, and save the file.

6. Empty the cache

Panels caches (thankfully) the default pages that third-party modules provide, so we must clear the cache when we create a new default page, and whenever we modify the code of an existing one.

Conclusion

This technique lets us sleep better at night. If someone ever touches the panel page and breaks things, we can always revert to the default code. We'll also be able to create pages based on existing ones when they are very similar, just by copying and modifying the original code, reducing our development time and improving our personal relationships as an unexpected bonus.

If you have any comments, you know what to do. If you see a mistake in my technique, please do let me know and I'll fix it right away.

If you've ever used the Drupal Views module, chances are at some point you've needed to suppress any output until AFTER the user has made a selection from one of your exposed filters. Views actually DOES make this possible, but it's not exactly self-evident. I'm going to run you through a quick "howto", as I'm sure many people have needed this at some point.

As I mentioned above, this is possible but not particularly self-evident. Views has a number of different "global" group items. The most common of these is probably the Random sort. Within arguments you also have another member of the global group, the global NULL argument. This is basically a way of attaching your own rudimentary argument to a view. Through the use of the default value (as custom PHP) and custom validation (again through PHP) you can cook up just about anything.

With our global NULL argument in place, the following settings are about all we need to make this really work:

1. Provide a default argument.
2. Argument type -> Fixed entry. (Leave the default argument field blank; what gets passed is irrelevant to our needs, we simply need to make it to the next level, which is validation.)
3. Choose PHP code as your validator.
4. Check through the $view->exposed_input array. I recommend using the Devel module's dsm() function here, because it will respond over the AJAX that Views is using (unlike drupal_set_message()).
5. Set "Action to take if argument does not validate:" to "Display empty text".

You can get as fancy in step 4 as you need, but it's just down to good old PHP if statements at that point.
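As a hedged illustration of step 4, the PHP validate code might boil down to something like this (the 'keys' index is hypothetical; substitute the identifier of your own exposed filter):

```php
<?php
// Runs in the argument's "PHP validate code" box. Return TRUE only once
// the user has actually submitted a value for one of our exposed filters;
// otherwise validation fails and the empty text is shown instead.
if (!empty($view->exposed_input['keys'])) {
  return TRUE;
}
return FALSE;
?>
```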

I hope this howto helps other people. We've found it rather useful, and since it's sort of arcane, I wanted to share it.

Thanks to Earl Miles (merlinofchaos) for pointing me in the right direction on this one!

Creating Panels styles can be very powerful. You can define certain styles for your client to choose from, so they can decide what type of display a panel pane will have. This way you keep the workflow clean, your code under revision control, your themer gets to keep his sanity, and your conscience stays clear.

This article assumes you know about running Panels, and more or less what the nomenclature is. You should also know that Panels now uses CTools, which is primarily a set of APIs and tools to improve the developer experience.

So, what we'll be doing here is actually creating a CTools plugin to implement a new Panels style. Sorry if I'm confusing you already; don't worry, it's actually quite straightforward. We want to be able to do this:

... and then this:

OK, now to the meat of it. We'll call our module ctoolsplugins.

1. Create a new module, and tell ctools about our plugins

What you need is very basic, an info file and a module file. So far, nothing new.

1.1 Declare our dependencies

So we obviously need the CTools module on our site, and, well, the plugin wouldn't make much sense without Panels and Page Manager, so:
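The .info file carries the dependencies[] = ctools, dependencies[] = panels, and dependencies[] = page_manager lines. In ctoolsplugins.module, the piece that tells CTools where to find our plugins is the standard directory hook (a sketch; the plugins/styles path is our own choice):

```php
<?php
/**
 * Implementation of hook_ctools_plugin_directory().
 *
 * Tell CTools that our Panels style plugins live in plugins/styles
 * inside this module's directory.
 */
function ctoolsplugins_ctools_plugin_directory($module, $plugin) {
  if ($module == 'panels' && $plugin == 'styles') {
    return 'plugins/styles';
  }
}
?>
```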

3. Implement our style plugin in collapsible.inc

3.1 Define your style goals and necessities.

OK, here you should think about what you are going to do with the plugin.

Is it just for markup?

Will you be offering different options?

Will you be implementing javascript on it?

In our case, we'll take the opportunity to teach you another thing that CTools has: the collapsible div utility. So our style will basically convert any panels pane into a collapsible panels pane:

And because we are friendly developers (or would rather not be bothered ever again after developing it), we'll give the user a chance to configure whether they want the pane to start opened or closed. That means an extra settings form, so we can have this:

3.2 Implement hook_panels_style_info

The naming is very important here, it should be modulename_stylename_panels_style. You basically return an array, defining your style:
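Following that convention, a sketch of the definition (the titles and descriptions here are illustrative, and the callback names are our own choices, reused in the later steps):

```php
<?php
/**
 * Style plugin definition for our 'collapsible' style (sketch).
 */
function ctoolsplugins_collapsible_panels_style() {
  return array(
    'title' => t('Collapsible pane'),
    'description' => t('Wraps the pane content in a CTools collapsible div.'),
    'render pane' => 'ctoolsplugins_collapsible_style_render_pane',
    'pane settings form' => 'ctoolsplugins_collapsible_style_settings_form',
  );
}
?>
```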

'title' and 'description' are pretty self-explanatory.
'render pane' specifies the theme function we'll be providing for rendering the pane. Watch the naming convention.
'pane settings form' specifies the callback function which provides the extra settings form that we'll be using for our start-up options. Watch the naming convention.

3.3 Define the settings form callback.

The name of the function will be what you specified in 'pane settings form' earlier. Just provide a new array inside $form for each option you want the user to be able to set. See the FAPI documentation for reference.

This is pretty straightforward; in our case we provide two options, one to start opened and one to start collapsed. If collapsed is chosen, the value will be 1.
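A sketch of that callback (the function name matches what we put in 'pane settings form'; the form element details are illustrative):

```php
<?php
/**
 * Settings form callback for the collapsible style (sketch).
 */
function ctoolsplugins_collapsible_style_settings_form($style_settings) {
  $form['collapsed'] = array(
    '#type' => 'radios',
    '#title' => t('Start state'),
    '#options' => array(0 => t('Opened'), 1 => t('Collapsed')),
    '#default_value' => isset($style_settings['collapsed']) ? $style_settings['collapsed'] : 0,
  );
  return $form;
}
?>
```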

3.4 Define the render callback function.

The name of the function will be what you previously specified in 'render pane'. This is just a theme function, and it is where you have the chance to alter what will be shown to the user when viewing the page.

<?php
/**
 * Render callback.
 *
 * @ingroup themeable
 */
function theme_ctoolsplugins_collapsible_style_render_pane($content, $pane, $display) {
  // Good idea for readability of code if you have a ton of settings.
  $style_settings = $pane->style['settings'];
  // We can do this because the only possible values are 0 or 1.
  $start_settings = $style_settings['collapsed'];
  $pane_content = $content->content;

  if ($content->title) {
    $pane_title = '<h2 class="pane-title">' . $content->title . '</h2>';
    // theme('ctools_collapsible', $handle, $content, $collapsed);
    $result = theme('ctools_collapsible', $pane_title, $pane_content, $start_settings);
  }
  // If we don't have a pane title, we just print the content as normal,
  // since there's no handle.
  else {
    $result = $pane_content;
  }

  return $result;
}
?>

The important thing to note here is that the user's chosen settings live in $pane->style['settings']. In this example we check whether a title is available, and if so we use the ctools_collapsible theme function to get our collapsible pane. Otherwise we have no handle, and we just return the content as normal.

And that is it. Hope you found the article useful, and if you'd like me to write up some more articles about writing plugins for panels/ctools, drop a comment with your question/suggestion!

UPDATE: You can also provide a style plugin in your theme, as shown in this fine tutorial.

CCK formatters are pieces of code that allow you to render a CCK field's content however you want. In Drupal 6 this basically means a theme function.

As an example, we will build a formatter for the field type 'nodereference'. This field type, which is part of the standard CCK package, allows you to "reference" one node inside another. The formatter that nodereference provides by default prints a standard link to the referenced node.

We are going to give users other options, allowing them to choose whether they want the link to open in a new window or, if they have the Popups module activated, in a jQuery modal window.

Let's call our module 'formattertest'.

Step 1: Declare our CCK formatters

<?php
/**
 * Implementation of hook_field_formatter_info().
 *
 * Here we define an array with the options we will provide on the display
 * fields page. The array keys will be used later in hook_theme and theme_*.
 */
function formattertest_field_formatter_info() {
  $formatters = array(
    'newwindow' => array(
      'label' => t('Open in new window link'),
      'field types' => array('nodereference'),
      'description' => t('Displays a link to the referenced node that opens in a new window.'),
    ),
  );
  if (module_exists('popups')) {
    $formatters['popup'] = array(
      'label' => t('Open in a popup window'),
      'field types' => array('nodereference'),
      'description' => t('Displays a link to the referenced node that opens in a jQuery modal window.'),
    );
  }
  return $formatters;
}
?>

In this function, you have to return an array of arrays that defines each formatter the module provides.

label: the name that the user will choose on the display fields configuration page.

field types: an array with the types of CCK fields that the formatter supports.

It's important to remember that the array keys you use, in our case 'newwindow' and 'popup', will be used later on to construct our hook_theme and theme_ functions. Note that for the second formatter, we first check whether the Popups module is active in the system, and only then add the formatter array that makes use of it.

2. Implement hook_theme

In hook_theme() you also return an array of arrays, defining the theme_ functions that will take care of rendering the CCK field content. 'element' will be the content of the CCK field, which is passed as the parameter to our theme function.
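A sketch of that implementation, assuming Drupal 6's hook_theme() (one entry per formatter, each taking the CCK $element as its only argument):

```php
<?php
/**
 * Implementation of hook_theme() (sketch).
 *
 * Register one theme function per formatter key declared in
 * hook_field_formatter_info().
 */
function formattertest_theme() {
  return array(
    'formattertest_formatter_newwindow' => array(
      'arguments' => array('element' => NULL),
    ),
    'formattertest_formatter_popup' => array(
      'arguments' => array('element' => NULL),
    ),
  );
}
?>
```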

'formattertest_formatter_newwindow' and 'formattertest_formatter_popup' will be used to build our functions in the next step.

3. Build our theme functions.

Remember that you can do dsm($element); (if you have Devel installed) to see what you have to play with ;)

<?php
/**
 * Theming functions for our formatters.
 *
 * And here we do our magic. You can use dsm($element) to see what you have
 * to play with (requires devel module).
 */
function theme_formattertest_formatter_newwindow($element) {
  $output = '';
  if (!empty($element['#item']['nid']) && is_numeric($element['#item']['nid']) && ($title = _nodereference_titles($element['#item']['nid']))) {
    $output = l($title, 'node/' . $element['#item']['nid'], array('attributes' => array('target' => '_blank')));
  }
  return $output;
}

/**
 * Theme function for popup links.
 */
function theme_formattertest_formatter_popup($element) {
  $nid = $element['#item']['nid'];
  // We want a unique id for each link, so we can tell the Popups API
  // to only process those links.
  $link_id = 'popup-' . $nid;
  $output = '';
  if (!empty($nid) && is_numeric($nid) && ($title = _nodereference_titles($nid))) {
    $output = l($title, 'node/' . $nid, array('attributes' => array('id' => $link_id)));
  }
  popups_add_popups(array('#' . $link_id));
  return $output;
}
?>

In the first function, we start from the formatter that nodereference provides by default, and we just add target="_blank" so that the browser opens the link in a new window.

In the second function, we first put the nid of the referenced node into the variable $nid in order to build the ID that we'll use on the link ($link_id). We need this so that we can tell Popups to apply its JavaScript only to those specific links. That way we avoid having to scan the whole document for popup links, making the site faster on the front end.

Conclusion.

Imagine, for example, that your module also provides a default view. You can then use this view to pull out information depending on the content of a CCK field, any CCK field that is using your formatter. No longer would you have to write complex, hard-to-maintain code in your template.php. You could just assign your formatter to any new field you create on any content type, reusing the same code.

Developers are all familiar with the default behavior of the Drupal menu system's "local tasks" (aka tabs). These appear throughout most Drupal sites, primarily in the administration area, but also on other pages like the user profile.

Generally, developers are pretty good about creating logical local tasks, meaning only those menu items which logically live under another menu item (like view, edit, revisions, workflow, etc... live under the node/% menu item).

But sometimes, these tabs either don't really make sense as tabs or you simply want to have the flexibility of working with the items as "normal menu items", or those menu items which appear under admin/build/menu.

I recently wanted to move some of the tabs on the user profile page (user/UID) into the main menu so that I could include them as blocks.

For some reason, developers think the user profile page is a great place to put tabs for user related pages such as friendslist, tracker, bookmarks, notifications and so on. But these types of items are less a part of the user's account information than they are resources for specific users. Personally, I would not think to look at my account information on a site to find stuff like favorites or buddies. I'd expect those items to be presented somewhere much more obvious like a navigation block.

Initially, this may seem like a trivial task. My first thought was to simply use hook_menu_alter() and change the 'type' value of the menu item from MENU_LOCAL_TASK to MENU_NORMAL_ITEM. However, for reasons I don't understand well enough to explain in detail, this does not work.

In order to achieve the desired result, you must change the path of the menu item and incorporate the '%user_uid_optional' argument, replacing the default '%user' argument.

All very confusing, I know. Let's look at an example.

The Notifications module (which provides notifications of changes to subscribed content) uses the user profile page rather heavily. I don't want its links there; I want them in the sidebar where users can always see them.

So I have moved the notifications menu into my own menu, changed the type, used %user_uid_optional instead of %user, and unset the original menu item.
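A hedged sketch of that alteration ('mymodule' is a placeholder, and the exact Notifications menu paths should be checked against the module's own hook_menu()):

```php
<?php
/**
 * Implementation of hook_menu_alter() (sketch).
 */
function mymodule_menu_alter(&$items) {
  // Copy the tab to a new top-level path, using the optional-uid loader.
  $items['notifications/%user_uid_optional'] = $items['user/%user/notifications'];
  $items['notifications/%user_uid_optional']['type'] = MENU_NORMAL_ITEM;
  // The user object is still loaded from position 1 of the new path.
  $items['notifications/%user_uid_optional']['page arguments'] = array(1);
  // Remove the original tab from the user profile.
  unset($items['user/%user/notifications']);
}
?>
```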

This works fine except for the fact that you'll lose all of the other menu items under user/%user/notifications! You need to account for all menu items in the hierarchy to properly reproduce the tabs in the main menu system, so we add the following:
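The child tabs can be carried over the same way; in this sketch the subtab names are purely illustrative, so substitute the real paths registered by the module:

```php
<?php
// Inside the same hook_menu_alter() implementation: move each subtab so
// the tab hierarchy stays intact under the new parent path.
foreach (array('thread', 'settings') as $subtab) {
  $old = 'user/%user/notifications/' . $subtab;
  $new = 'notifications/%user_uid_optional/' . $subtab;
  if (isset($items[$old])) {
    $items[$new] = $items[$old];
    unset($items[$old]);
  }
}
?>
```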

And of course, we don't want this code executing at all if our module is not enabled, so you'd want to wrap the whole thing in:

<?php
if (module_exists('notifications')) {

  <SNIP>

}
?>

Keep in mind that not all modules implement menu items using hook_menu(). It's becoming more and more common for developers to rely on the Views module to generate menu items, and this is a wise choice. Menus generated using Views (à la the Bookmark module) can be modified to get the desired result without any custom code.

I have had a seemingly constant battle with WebDAV for some reason over the years. We use it to hold Kate's graduate work and temporary work storage, so it is constantly needed. To its credit, the problems I've had have never been of its own making, but rather a conflation of issues from installing so many services on top of each other (Subversion, Drupal, WebDAV). My most recent battle involved not being able to write to a WebDAV folder that existed as a subdirectory of a Drupal site. The READ, DELETE, and other actions worked fine, but the PUT action failed because Drupal's mod_rewrite routine jumped in and stole the show. The result was a 403 error in my WebDAV client, and for all you Google searchers out there, the error log parroted out something like this:

The solution was simple, if not intuitive: disable mod_rewrite for this directory and all will be fixed. This can be done by adding an exclusion to the Drupal .htaccess file in the mod_rewrite section, or by disabling it completely in a .htaccess file that resides right in the WebDAV directory. Both solutions are shown below, but only one needs to be used:

Initial Organization and Setup

I have Drupal installed in the root directory (httpdocs) of my virtual host, such that www.example.com pulls up Drupal. I have created a folder in this directory called "webdav" and mapped an alias from "webdav" to "dav", like so:

This was necessary for some old MS Windows WebDAV clients, if I remember correctly, but the mod_rewrite part only cares about the functioning WebDAV directory I have defined. If you have an otherwise functioning WebDAV directory without using the <Location> tags in your vhost file, then don't worry about it and just use that directory's name.

Parent Directory method

File: .htaccess of parent directory which is also the root directory in this case
# Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !=/favicon.ico
RewriteCond %{REQUEST_URI} !^/dav/(.*)$
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

I only added the fourth condition to the ones already in the Drupal .htaccess file, but together these four lines of rewrite conditions basically say:

If the requested URI isn't a real file (line 1), and if it isn't a real directory (line 2), and if it isn't "/favicon.ico" (line 3), and finally (line 4) if it doesn't (!) start with "/dav/" (^/dav/) followed by any number of other characters ( (.*) ) until the end ($), then rewrite it to index.php. Here is a useful cheatsheet for mod_rewrite that I like to use if your matching conditions vary from mine.

Webdav directory method
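A minimal version of this second approach (assuming the directory is served by Apache and AllowOverride permits it) might look like:

```apache
# File: .htaccess inside the webdav directory itself.
# Turn rewriting off entirely so Drupal's rules never touch PUT requests.
RewriteEngine off
```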

This one needs much less explanation, I think, and in my opinion it is also slightly less insecure and messy. Either way you go, you'll have better luck than you did before, I promise. Good luck, and comment if you have a question or if this was useful to you.

I began the Devbee website back in March as a way to help others by way of documenting what I have learned about Drupal and also to drum up a little bit of business for myself. The content of this site is extremely targeted, and I don't ever expect to see more than a few hundred visits a day. This definitely does not reflect the expectations, or at least hopes, of most website owners. It's typically all about bringing in as many visitors as possible to generate money through advertising or purchases. Sites interested in bringing in large numbers of visitors typically do this by spending a lot of time focusing on "search engine optimization" (SEO). Absolutely nothing can drive traffic to a site like a top placement in the search results on one of the major search engines.

Back in the day (way back during the last millennium), all one needed to do was have a simple HTML page containing relevant words or phrases and he was fairly likely to make a decent showing in results pages. In fact, this is exactly how I shifted from studying literature to building websites. I built my first homepage (don't laugh!) for fun. It was found by an employer, and I got a cool job at a major search engine. Today, it is not so simple.

Fortunately for us, as Drupal users, we have a secret weapon, Drupal itself. Drupal SEO does not require any witchcraft or elaborate HTML trickery. It's simple, and in this article, I'm going to explain how I get consistent premium search placement with very little effort.

Stumbling upon Drupal SEO

Today I discovered that an article I wrote recently is the top result for the query "opcode cache" on Google. I almost feel guilty about it. There are countless pages out there with much more information on the topic than my article, yet I'm at the top. I guess I'll just have to deal with it.

This is not unusual. I find myself on "the first page" of many searches for terms relevant to my site. And when I'm not seeing a premium placement (top-ten), it's either because the search term is very broad (e.g. "Drupal") or there are simply much more relevant pages pushing my placement down. Just like the old days.

And more than half of my very modest traffic comes through these search results.

What's the Secret?

Now comes the mysterious part. I make no claims of expertise in the area of SEO. It's mostly voodoo as far as I'm concerned. The search engines are necessarily very secretive about their methods, trying to stay ahead of search engine spammers. And what works today may be detrimental tomorrow. What I'm going to describe below is entirely based on my own, very subjective, experience with various techniques and modules. These are the things that I believe are resulting in my accidental SEO success.

Drupal SEO

Drupal itself is well-known for its search-engine friendliness. Its markup is clean and standards-compliant. It creates all the tags the engines are looking for. And unlike so many other CMSs, Drupal creates search engine friendly URLs. Using Drupal is the first step in this process, but presumably you're already doing this, so let's move on.

The Right Path
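Compare two URLs for the same kind of announcement (illustrative examples reconstructed from the discussion that follows, not the exact originals):

```
http://example.com/index/topic,65.0.html
http://example.com/drupal-50-beta1-released
```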

Do you notice a difference? Can you tell me anything about the Joomla article without going to the page? In fact you can, sort of: you might conclude that the page covers a topic, a fact of dubious value. The URL really provides no useful information to you. Nor does it provide anything useful to a search engine. This is key. Unless you're searching for "index topic 65.0 html", this URL isn't going to help you find the information on this page.

Looking at the Drupal URL is another story. Based on that URL, one can assume that it has something to do with "drupal 5.0 beta1", and so can a search engine. If that's what you're looking for, this page will come up #1.

Drupal allows you complete control of the path of any page. Creating short, clean and informative paths will improve your rankings. And the Pathauto module automates the process of generating relevant paths. But be extremely careful when experimenting with Pathauto, particularly on sites with existing content. Using Pathauto without first understanding how to use it properly can result in all of the URLs on your site changing, and thereby breaking existing links to your content. If you are going to introduce Pathauto on an existing site, play it safe and enable the Create a new alias in addition to the old alias option in Pathauto's settings. But keep in mind that having multiple URLs pointing to the same page on your site may result in a search engine penalty for "duplicate content".

Sitemaps

Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional meta data about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
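In practice, that XML looks something like this minimal example following the sitemap protocol (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/drupal-seo</loc>
    <lastmod>2006-11-27</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```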

I've seen no solid evidence that implementing a sitemap will directly improve search rankings. However, even if search engines do not use your sitemap to adjust the ranking of your pages (which I doubt), it does help them index your site more efficiently, thereby increasing the likelihood of your pages being included in search results. This one's a no-brainer.

Sitemaps would be virtually impossible to maintain by hand, and this is where the excellent XML Sitemap (formerly Google Sitemap) module comes in. Installing this module is simple, and it comes with reasonable default settings that don't require changing unless you want to fine-tune your sitemap. After you've installed and enabled it, you'll need to tell search engines about your sitemap. At this point, I'm only familiar with Google Sitemaps, though other major companies are beginning to adopt this concept as a new open standard.

Leaving Comments

Another common method used by search engines to determine the importance of your pages is the number of other sites that link to them. A simple way to continually promote your site while helping improve your search rankings is to make regular comments on other sites like Drupal.org. Take the time to create an account on sites similar to yours and complete your public profile. Then leave useful comments where appropriate. Do not post comments simply to include a link back to your site. This is in very poor taste and may get you blocked. Instead, post comments where you have something to contribute to the topic being discussed. If you have nothing useful to add, don't post a comment. I'm a regular participant over at Drupal.org, and I'm confident this helps the "relevance" of my own site.

Page Title

By default, Drupal will use the title of your node as the page HTML title (the bit that appears in the <title></title> tags of the HTML and shows up in the title bar of your browser). This is very reasonable behavior. However, if you want to give your page that extra SEO boost, you may want to allow for two different page titles: one that appears at the top of the page in <h1> tags, and another that appears in the head of the HTML document in the <title> tag. The <h1> and <title> tags are both pieces that search engines will consider when reviewing your page. If they are identical, you're missing out on an opportunity to further promote the page!

So how do you manage to control the <title> tag contents if Drupal automatically sets it based on the node title? The Page Title module does this. Install and enable this module, and you will see an additional field on the node edit form called "page title". Use this field to configure the phrase that you think will most likely attract users to the page. Use something eye catching and alluring, something the user will feel he has to read. If you're writing about an article you found on another site, don't title the page "cool link!", instead, something more enticing: "Fascinating study of the Indonesian spotted tadpole". Follow that up with a relevant <h1> title: "National Geographic looks at one of nature's most mis-understood wonders".

The Prophecy

Search result placement was not a top concern of mine when I built this site. But it has become a bit of an obsession now. I have no need to drive thousands of visitors seeking information on opcode caching to my site, but hitting that number one position for a query is a bit of a rush! Thanks Drupal!

Lastly, I asked myself a question as I wrote this article: Is there anything at all to what I'm saying? Well, I think there is, and I'm willing to make a bold prediction based on this belief. Within three days of posting this article, I believe it will appear in the top-ten search results for "Drupal SEO" on Google. If I'm right, that should serve as some pretty solid evidence that there's something to all this. There are currently 1,090,000 pages competing for placement in this results page. The odds of making it into the top ten by sheer luck are 1 in 109,000.

And if I'm wrong, well, I can always come back and edit out this prediction to save face %^)

The Revelation

Update: Mon Nov 27 23:19:42 2006

A search for "Drupal SEO" now shows this article as the second result out of 1,080,000 pages. I come in just below an article on Drupal.org.

So as you can now see, there is not a lot of work involved in getting premium search placement if you are using Drupal. Of course, the broader your topic, the more difficult it will be to hit the top ten. While you can almost certainly hit number one for surfers searching for a certain rare antiquity, you're less likely to see much success attracting surfers hunting for the term "sex".

Until the mid 90s, spam was a non-issue. It was exciting to get email. The web was also virtually spam-free. Netizens respected one another and everything was very pleasant. Those days are long gone. Fortunately, there are some pretty amazing tools out there for fighting email spam. I use a combination of SpamAssassin on the server side and Thunderbird (with its wonderful built in junkmail filters) on the desktop. I am sent thousands of spam messages a day that I never see thanks to these tools.

But approximately five years ago, a new type of spam emerged which exploited not email but the web. Among this new wave of abuse is my personal favorite: comment spam.

I love getting comments on my blog. I also like reading comments on other blogs. However, it's not practical to simply let anyone who wants to leave a comment do so, as within a very short period of time, blog comments will be overrun with spam generated by scripts that exploit sites with permissive comment privileges. To prevent this, most sites require that you log in to post a comment. But this may be too much to ask of someone who just wants to post a quick comment as they pass through. I often come across blog postings which I would like to contribute to, but I simply don't bother because the site requires me to create an account (which I'd likely only use once) before posting a comment. Not worth it. Another common practice is the use of "captchas", which require that a user enter some bit of information to prove they are human and not a script. This works fairly well; however, it is still a hurdle that must be jumped before a user can post a comment. And as I've personally learned, captchas, particularly those that are image based, are prone to problems which may leave users unable to post a comment at all.

As email spam grew, there were various efforts to implement similar types of protection, requiring the sender to somehow verify he was not a spammer (typically by resending the email with some special text in the subject line). None of these solutions are around anymore because they were just plain annoying. SpamAssassin and other similar tools are now used on most mail servers. Savvy email users will typically have some sort of junk-mail filter built into their email client or perhaps as part of an anti-virus package. And spam is much less of a nuisance as a result.

What we need for comment spam is a similar solution. One that works without getting in the way of the commenter or causing a lot of work for the blog owner. Turn it on, and it works. I've recently come across just such a solution for blogs which also happens to have a very nice Drupal module so you can quickly and easily put this solution to work on your own Drupal site.

Enter Akismet

It's called Akismet, and it works similarly to junkmail filters. After a comment (or virtually any piece of content) has been submitted, the Akismet module passes it to a server where it is analyzed. Content labeled as potential spam is then saved for review by the site admin and not posted to the blog.
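The module takes care of this exchange for you, but under the hood it's just an HTTP call to Akismet's comment-check endpoint. As a rough illustration only (the API key, blog URL, and comment values below are all placeholders), the request looks something like this:

```shell
# Placeholder key and values -- substitute your own.
# Akismet responds with a plain-text body of "true" (spam) or "false" (ham).
curl -s \
     -d "blog=http://example.com" \
     -d "user_ip=192.0.2.1" \
     -d "user_agent=Mozilla/5.0" \
     -d "comment_content=Buy cheap meds now" \
     "http://YOUR_API_KEY.rest.akismet.com/1.1/comment-check"
```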

Pricing

Akismet follows my absolute favorite pricing model. It's free for workaday Joes like me and costs money only if you're a large company that will be pumping lots of bits through the service. They realize that most small bloggers are not making any money on their sites, and they price their service accordingly. Very cool.

Installation

In order to use Akismet, you need to obtain a WordPress API key. I'm not entirely sure why, but it is free and having a collection of API keys is fun. So get one if you have not already.

The Akismet Drupal module is appropriately named Akismet. It's not currently hosted on Drupal.org, but hopefully the author will eventually host it there as that is where most people find their Drupal modules. Instead, you will need to download the Akismet module from the author's own site. The installation process is standard. Unzip the contents into your site's modules directory, go to your admin/modules page and enable it. There is no need for additional Akismet code as all the spam checking is done on Akismet's servers.

Configuration

After installing Akismet, I was immediately impressed at how professional the module is. There were absolutely no problems after installation. Configuration options are powerful and very well explained. The spam queue is very nice and lets you quickly mark content as "ham" (i.e., not spam) and delete actual spam. As you build up a level of trust with the spam detection, you can configure the module to automatically delete spam after a period of time.

Spam filtering can be enabled on a per-node-type basis, allowing you to turn off filtering for node types submitted by trusted users (such as bloggers) and on for others (e.g., forum users). Comment filtering is configured separately.

Another sweet feature is the ability to customize responses to detected spammers. In addition to being able to delay response time by a configurable number of seconds, you can also configure an alternate HTTP response to the client, such as 503 (service unavailable) or 403 (access denied). Nice touch.

One small problem

I've only been working with Akismet for several days now. And I'd previously been using captcha, which I imagine got me out of the spammers' sights for a while (spammers seem to spend most of their efforts on sites where their scripts can post content successfully). So far, Akismet has detected 12 spams, 2 of which were not actually spam. These were very short comments, and I imagine Akismet takes the length of the content into consideration. I assume that as the Akismet server processes more and more pieces of content, it will become more accurate in picking out spam versus legitimate content. Each time a piece of flagged content is marked as "ham", it is sent to Akismet where it can help refine their rule sets and make the service more accurate.

Perhaps Akismet could provide an additional option that allows users to increase or decrease tolerance for spam. I would prefer to err on the side of caution and let comments through.

PHP is an interpreted language. This means that each time a PHP-generated page is requested, the server must read in the various files needed and "compile" them into something the machine can understand (opcode). A typical Drupal page requires that more than a dozen of these bits of code be compiled.

Opcode cache mechanisms preserve this generated code in cache so that it need only be generated a single time to serve hundreds or millions of subsequent requests.

Enabling an opcode cache will reduce the time it takes to generate a page by up to 90%.

PHP is known for its blazing speed. Why would you want to speed up your PHP applications even more? Well, first and foremost is the coolness factor. Next, you'll increase the capacity of your current server(s) many times over, thereby postponing the inevitable need to add new hardware as your site's popularity explodes. Lastly, high bandwidth, low latency visitors to your site who are currently seeing page load times in the 1-2 second range will be shocked to find your vamped up site serving up pages almost instantaneously. After enabling opcode cache on my own server, I saw page loads drop from about 1.5 seconds to as low as 300ms. Now that's good fun the whole family can enjoy.

Opcode Cache Solutions

There are a number of opcode caching solutions. For a rundown on some of them, read this article. After a bit of research and a lot of asking around, I concluded that eAccelerator was the best choice for me. It's compatible with PHP5, is arguably the most popular of its kind, and is successfully used on sites getting far more traffic than you or I are ever likely to see.

Implementing eAccelerator

This is the fun and exciting part. Implementing opcode cache is far easier than you might imagine. The only thing you'll need is admin (root) access to your server. If you're in a shared hosting environment, ask your service provider about implementing this feature if it is not in place already. These instructions apply to *nix environments only.

Poor Man's Benchmarking

If you would like to have some before and after numbers to show off to your friends, now is the time to get the 'before' numbers. Ideally, you will have access to a second host on the same local network as your server so that the running of the test does not affect the results. For those of us without such access, we'll just have to run the test on the actual webserver, so don't submit these results in your next whitepaper:

Apache comes with a handy benchmarking tool called "ab". This is what I use for quick and dirty testing. From the command line, simply type in the following:
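The exact invocation from the original post wasn't preserved, but a typical quick-and-dirty run looks like this (substitute your own hostname; -n and -c values are just examples):

```shell
# -n: total number of requests, -c: concurrency level.
# Watch the "Requests per second" line in the output.
ab -n 100 -c 10 http://www.example.com/
```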

Configure PHP and Restart Apache

If you have an /etc/php.d directory, create the file /etc/php.d/eaccelerator.ini for your new settings. Alternatively, you can put them in your php.ini file. Your configuration should look something like this:
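The original snippet wasn't preserved here, so treat the following as a plausible sketch built from eAccelerator's documented options; the extension path, cache directory, and shared-memory size are assumptions that will vary by distribution:

```ini
extension="eaccelerator.so"
eaccelerator.shm_size="16"
eaccelerator.cache_dir="/var/cache/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.filter=""
eaccelerator.shm_ttl="3600"
eaccelerator.shm_prune_period="3600"
eaccelerator.allowed_admin_path="/var/www/html/eaccelerator"
```

Remember to create the cache directory (and make it writable by the web server user) before restarting Apache.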

Adjust values according to your particular distribution. For more details on configuring eAccelerator, see the settings documentation.

See eAccelerator in Action

The value of eaccelerator.allowed_admin_path, if enabled, should point to a web-accessible directory containing a copy of 'control.php' (which comes with the eAccelerator source code). Edit this script, changing the username and password. You can then access this control panel and see exactly what eAccelerator is caching.

See the results

After enabling eAccelerator on devbee.com, I ran my benchmark again, and here are the results:

We are now serving up 95.49 requests per second. That's a 754% increase in server capacity. Had I been able to run the tests from another machine on the same network, I believe the numbers would be even more dramatic.

One of the great features of Drupal is its ability to run any number of sites from one base installation, a feature generally referred to as multisites. Creating a new site is just a matter of creating a settings.php file and (optionally) a database to go with your new site. That's it. More importantly, there's no need to set up complicated Apache virtual hosts, which are a wonderful feature of Apache, but can be very tricky and tedious, especially if you're setting up a large number of subsites.

The one wrinkle: because all of these sites are served from a single virtual host, their traffic lands in a single Apache access log, which makes per-site log analysis a pain. No worries, there is a solution.

Create a new LogFormat

Copy the LogFormat of your choice, prepend the HTTP host field, and give it a name:
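Assuming you're starting from Apache's standard combined format, the new format might look like the following; the name "vcombined" and the log path are arbitrary choices for illustration:

```apache
# %{Host}i prepends the HTTP Host header, so each log line
# records which subsite the request was for.
LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined
CustomLog /var/log/httpd/access_log vcombined
```

Log-analysis tools can then split the combined log by the leading host field, one report per subsite.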

Problem

You have multiple (multisite) Drupal sites and you would like to manage the content for all of these sites through a single interface. Depending on the nature of a given piece of content, you may want the content published on one, several or all of your subsites, but you do not want to have to create copies of the same content for each site.

Solution

Taxonomy plus MySQL5 views. (NOTE: this solution will not work with versions of MySQL prior to 5.)

Assuming you have your subsites properly set up and running, the first step is to create a special vocabulary which you will use to target content.

Because the terms that serve as our subsite labels may very well exist within other vocabularies, we also need to join on the vocabulary table to ensure our solution works reliably.
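The original SQL wasn't preserved in this copy, but against the Drupal schema of the day (node, term_node, term_data, vocabulary) the per-subsite view would look something like this; the term name 'foo' and vocabulary name 'Subsites' are illustrative:

```sql
-- One view per subsite: content tagged "foo" in the "Subsites"
-- vocabulary becomes visible as that subsite's node table.
CREATE VIEW foo_node AS
SELECT n.*
FROM node n
  INNER JOIN term_node tn ON n.nid = tn.nid
  INNER JOIN term_data td ON tn.tid = td.tid
  INNER JOIN vocabulary v ON td.vid = v.vid
WHERE td.name = 'foo'
  AND v.name = 'Subsites';
```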

Finally, we need to have our subsites use the views we have created instead of our master nodes table, which only the "master" site will have access to directly.

In your Drupal sites directory, you should have directories that correspond to each of your Drupal sites (both master and subsites). Edit the settings.php file for each of your subsites, and use the $db_prefix variable to point the site to your view. So sites/foo.example.com/settings.php would contain the following:

$db_prefix = array(
  'node' => 'foo_',
);

At this point, you'll want to disable creation of content from within each of your subsites. You can do this from the admin/access page. If you attempt to create content from within the subsites, you'll likely get a 'duplicate key' error.

I hope that explanation is clear. These articles are written rather hastily. If you have questions or suggestions regarding this solution, please leave a comment.