In just a few weeks the Norwegian Drupal association will host the annual DrupalCamp Oslo (9th-10th of November). If you have not already booked your tickets, now is the time!

Great featured speakers

We are very pleased with our program this year. In addition to the rest of the program, we are proud of our invited featured speakers:

Senior technical architect justafish from Lullabot is coming to speak about the JavaScript modernization initiative! If you are not already aware of the work going on in core in this area, don't miss this opportunity to get a first-hand view of the exciting progress!

CEO and co-founder of 1xINTERNET baddysonja will present a session about how "Drupal is full of opportunities". Come and get inspired about the Drupal ecosystem, with a focus on contribution and volunteering!

Also joining us is security team member Stella Power, Managing Director and founder of Annertech.

Open source in the public sector

But that is not all: the first half of Friday will be dedicated to the subject "open source in the public sector". This segment will be free for everyone to attend, and aims to bring attention to the subject, especially in Norway, where (in my own subjective opinion) we still have a way to go in this area. It will feature national and international case studies, as well as Jeffrey A. “jam” McGuire talking about international trends.

What are you waiting for?

The preliminary program is available here, and we still have early bird tickets for just a few days more.

Today I encountered a problem I had not thought about before. After I pushed a fix to a project I am working on, the CI builds started showing errors, caused by a message like this:

The service "mymodule.attachments_manager" has a dependency on a non-existent service "metatag.manager".

In many cases when you see that, it probably means your module was installed before a module it depends on. For example, in this case, it would seem that this module depends on metatag, and so declaring it as a dependency would fix the issue. And for sure, it would. But sometimes dependencies are not black and white.

This particular service does some handling of the attachments when used together with metatag. It does so, because it is a module we use across projects, and we can not be sure metatag is used in any given project. So it's only used in a way that is something like this:
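The original code sample is not included here, but conditional use of the service might look something like this (a sketch; the method name handleAttachments and the hook context are assumptions, only the service names come from the error message above):

```php
<?php

/**
 * Implements hook_page_attachments().
 *
 * Only use the attachments manager when the metatag module is present,
 * since the service is useless without it.
 */
function mymodule_page_attachments(array &$attachments) {
  if (\Drupal::moduleHandler()->moduleExists('metatag')) {
    \Drupal::service('mymodule.attachments_manager')
      ->handleAttachments($attachments);
  }
}
```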

What this means is that for the attachments manager to be useful, we need metatag. If we do not have metatag, we do not even need this service. So basically, the service depends on metatag (as it uses the service metatag.manager), but the module does not (as it does not even need its own service if metatag is not installed).

Now, there are several ways you could go about fixing this for a given project. Creating a new module that depends on metatag could be one way. But today, let's look at how we can make this service have an optional dependency on another service.

Now, this could never work if this module was installed before metatag (which it very well could be, since it does not depend on it). A solution, then, is to make the metatag.manager service optional, which we can do by removing it from the constructor and creating a setter for it.
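In the module's services.yml that could look something like this (a sketch; the class name is an assumption). The "@?" prefix marks the reference as optional: if metatag.manager does not exist in the container, the setter call is simply skipped instead of throwing an error.

```yaml
# mymodule.services.yml (sketch)
services:
  mymodule.attachments_manager:
    class: Drupal\mymodule\AttachmentsManager
    calls:
      # "@?" means: only call setMetatagManager if metatag.manager exists.
      - [setMetatagManager, ['@?metatag.manager']]
```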

So there we have it! The service can be instantiated without relying on the metatag.manager service. And if it is available in the service container, the method setMetatagManager will be called with the service, and our service will have it available in the cases where we need it.

Now let's finish off with an animated gif related to "service container".

In many cases, a route name is something you might need to get something done in Drupal 8. For example to generate a link, or maybe create a local task in your module.

Some examples:

The path can be found in a routing.yml file

Sometimes you can just search for the path in your codebase and find the corresponding route name. Let's say I wanted to link to the page mysite.com/admin/config/regional/translate. If I search the codebase for this path, it turns up in the file locale.routing.yml:
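The relevant entry looks roughly like this (abridged; the defaults of the real core file are omitted here, but the key and path are what matter):

```yaml
# core/modules/locale/locale.routing.yml (abridged)
locale.translate_page:
  path: '/admin/config/regional/translate'
  requirements:
    _permission: 'translate interface'
```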

So to conclude, the route name for that particular page is the key in the routing file, in this case locale.translate_page.

The path can not be found in a routing.yml file

Now, this is what I really wanted to write about in this blog post. Getting the route name directly from a routing file is simple enough, but where do you look if the path can not be found in a routing file?

Find route name with PHPStorm

My first trick is to utilize the IDE I use, PHPStorm.

Start by setting a breakpoint in index.php on the line that looks like this:

$response->send();

Next step, refresh your browser on the page you want to know the route name for, and hopefully trigger your breakpoint. Then you click on the icon for "evaluate expression". On my work computer this has the shortcut key alt-f8, but you can also find it in the debugger toolbar, or via the menu (Run -> Evaluate expression).

Then evaluate the following code:

\Drupal::routeMatch()->getRouteName()

That should give you the name of the route. As illustrated below in a gif:

Find route name with any development environment

Now, I realize that not everyone uses PHPStorm, so here is a solution that should work without having Xdebug and an IDE set up:

Following the same tactic as above, let's open up index.php again. Now, just change the following code:
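A sketch of what that edit could look like (the two surrounding lines are from a stock Drupal 8 index.php; the print statement is the addition):

```php
// index.php (development copy only). After the kernel has handled the
// request, the route match is populated, so we can print the route name
// before the response is sent.
$response = $kernel->handle($request);
print \Drupal::routeMatch()->getRouteName();
$response->send();
```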

Now visit the page you want to know the route name for. This will print the name of the route as the very first output of your Drupal site. Since you probably do not want this for your live Drupal site, this is best done on a development copy.

Find route name with Drupal Console

Drupal Console maintainer Jesus Manuel Olivas pointed out in the comments a rather cool way to browse the router list of routes:
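If memory serves, the relevant Drupal Console command is debug:router, which can list all routes or show the details of one:

```shell
# List every route on the site.
drupal debug:router

# Narrow the list down to routes containing "translate".
drupal debug:router | grep translate

# Show path, defaults and requirements for a single route.
drupal debug:router locale.translate_page
```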

If you are like me, you might already have started planning the upgrade to Drupal 8.5, now that the first release candidate is out. It's awesome, by the way, thanks among other things to the incredible work done on the layout builder. And if you are more like me, you are managing your sites with composer. Then, depending on the rest of your project, you might (also like me) have encountered some initial problems upgrading to Drupal 8.5.

Having hit my fair share of composer oddities running the Violinist.io composer monitoring and upgrade service, I wanted to compile a couple of error messages along with solutions, for the folks out there struggling with this.

The reason this fails is that the project you have created depends on the dev packages for Drupal core, which are tied to a specific version of core. So to update core, we also need to update the dev packages for core.

The solution to this is pretty simple: open your composer.json file and replace the lines for drupal/core and webflo/drupal-core-require-dev with the following:
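A sketch of the relevant composer.json excerpt after the change (other entries in your file stay as they are):

```json
{
    "require": {
        "drupal/core": "~8.5.0"
    },
    "require-dev": {
        "webflo/drupal-core-require-dev": "~8.5.0"
    }
}
```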

Edit: balsama pointed out in the comments that you can also run the command composer require drupal/core:~8.5.0 webflo/drupal-core-require-dev:~8.5.0 --no-update. However, this would move the package webflo/drupal-core-require-dev to "require" instead of "require-dev", which is probably not what you want. You could of course do a similar thing in two commands (one for require and one for require-dev), which would yield a similar result to updating by hand.
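The two-command variant could look something like this (standard Composer flags; --dev targets require-dev and --no-update defers resolution until the update command):

```shell
# Keep drupal/core in "require" and the dev package in "require-dev".
composer require drupal/core:~8.5.0 --no-update
composer require --dev webflo/drupal-core-require-dev:~8.5.0 --no-update

# Then perform the actual update.
composer update drupal/core webflo/drupal-core-require-dev --with-dependencies
```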

This probably comes from the fact that some other packages in your project also depend on this specific Symfony package, like Drush or Drupal Console. This would not be a problem in itself, were it not for the fact that drupal/core 8.5 relies on the package symfony/dependency-injection, which has specifically listed "symfony/config": "<3.3.7" as a conflict. Here is a full error message, for reference:

The solution here is to indicate you also want to update this package, even if it's not specifically required. So if the failing command was the following:

composer update drupal/core --with-dependencies

Go ahead and change it to this:

composer update drupal/core symfony/config --with-dependencies

Edit: To clarify based on a comment from balsama: It does not help to add the flag --with-all-dependencies here, since it is not related to a "sibling" dependency, or a nested dependency of the packages to be updated.

If you have other error messages, I would be glad to help out with a solution, and post the result here.

Violinist.io is a new service that continuously tries to update your composer dependencies. When a new update is found, a pull request is created on the GitHub repo for the project in question, for example your Drupal site. If you have a good testing setup, this will trigger your tests, which will hopefully pass. And if you have continuous deployment set up, you can basically merge and deploy updates while sitting in a coffee shop, on your phone. Which is something I have now done several times!

I am planning to write a longer blog post about a more complete continuous deployment setup, but I just wanted to share a couple of quick, fun animated gifs showing how Violinist.io works.

A couple of weeks ago a new version of Drupal Console came out. After it was tagged on GitHub, an update was available through composer. Violinist picked this up and opened a new pull request on all of my projects that depend on it. That looks something like this:

I captured this animation because I was fascinated by the short time window between the release and the pull request. As you can see in the animation, it was only around 10 minutes! All that was left for me to do was to see that the tests passed, read through the changelog (including links to all commits) and merge in the update. Minutes later it was automatically deployed to the production server. About as easy as it gets!

But it's not only other GitHub-hosted projects or generic PHP packages that get updated. For a typical Drupal project I also depend on modules from Drupal.org, and I download these modules with composer. Violinist.io supports those as well. Here is one example (from this very site you are reading) where a new pull request with a full changelog was posted only 8 minutes after the release on Drupal.org.

Since admin_toolbar is a module I use on many projects, I now could just navigate from pull request to pull request, and update all of my sites within minutes, while still on my phone. A real time saver!

Full disclosure: As you probably understand from the enthusiastic description, I am also the creator of the service. It is completely free for open source projects, and up to one private project. Feel free to reach out if you have any questions or comments! To finish it off, here is an animated gif about enthusiasm.

The annual meeting of Drupal enthusiasts in Norway (and elsewhere) will take place on the 11th of November, with community sprints happening November 12th.

Every year, the camp attracts Drupal professionals and hobbyists from Norway, but also from the surrounding countries. If you want to meet Drupal enthusiasts from our region, this is a great chance to do so.

We also want to invite people who want to speak to submit their session proposals on our website. Whether you are a seasoned conference speaker or want to give your first session, you are very welcome to submit your talk to Drupal Camp Oslo.

If you prefer to attend, you are just as welcome! Tickets are now available for purchase, and at the moment they are extra early-bird cheap! We also have different price tiers if you attend as a hobbyist or student, and we hope for a diverse audience of attendees, both in the sessions and in the sprints on Sunday!

If you have any questions regarding the event, feel free to reach out to me in the comments, by mail or on Twitter.

In this part we will look at simplifying the request flow for the client, while still keeping a certain level of security for our endpoint. There are several ways of doing this, but today we will look at a suggestion on how to implement API keys per user.

Let me just start first by saying there are several additional steps you could (and ideally should) implement to get a more secure platform, but I will touch on these towards the end of the article.

First, let's look at a video I posted in the first blog post. It shows an example of posting the temperature to a Drupal site.

[Video: posting the temperature to a Drupal site]

Architecture

Let's look at the scenario we will be implementing:

A user registers an account on our Drupal 8 site

The user is presented with a path where they can POST their temperatures (for example example.com/user/5/user_temp_post)

The temperature is saved for that user when that request is made

See any potential problems? I sure hope so. Let's move on.

Registering the temperature in a room and sending to Drupal

As last time, I will not go into details of any microcontroller or hardware specifics in the blog post, but the code is available on GitHub. I will quickly go through the technical steps and technologies used here:

I use a Raspberry Pi 2, but the code should work on any model of Raspberry Pi

I use a waterproof DS18B20 sensor, but any DS18B20 should work. I have a waterproof one because I use it to monitor my beer brewing :)

The sensor checks the temperature at a certain interval (by default, 1 minute)

The temperature data is sent to the Drupal site and a node is created for each new registration

To authenticate the requests, they are sent with an x-user-temp header including the API key
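The request described in the steps above could be sketched like this (the path follows the pattern described later in this post; the JSON payload shape is an assumption):

```shell
# Hypothetical sketch: POST a temperature reading, authenticating
# with the API key in the x-user-temp header.
curl -X POST \
  -H "x-user-temp: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"temp": 21.5}' \
  https://example.com/user/5/user_temp_post
```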

This scenario is a bit different from the very real time example in the video above, but it is both more flexible (in terms of having a history of temperatures) and real-life (since temperatures seldom have such real-time changes as the one above).

Receiving temperatures in Drupal

The obvious problem with the situation described above is the authentication and security of the transferred data. Not only do we not want people to be able to just POST data to our site with no authentication, we are also dealing with temperatures per user. So what is to stop a person from just POSTing a temperature on behalf of another user? The last post dealt with using the same user session as your web browser, but today we are going to look at using API keys.

If you have ever integrated a third party service to Drupal (or used a module that integrates a third party service) you are probably familiar with the concept of API keys. API keys are used to specify that even though a "regular" request is made, a secret token is used to prove that the request originates from a certain user. This makes it easy to use together with internet connected devices, as you would not need to obtain (and possibly maintain) a session cookie to authenticate as your user.

Implementation details

So for this example, I went ahead and implemented a "lo-fi" version of this as a module for Drupal 8. You can check out the code on GitHub if you are eager to get all the details. I also deployed the code on Pantheon, so you can actually go there, register an account and POST temperatures if you want!

The first step is to actually generate API keys for users who want one. My implementation just generates one for a user when they visit their "user temperatures" tab for the first time.

Side note: The API key in the picture is not the real one for my user.

Next step is to make sure that we use a custom access callback for the path we have defined as the endpoint for temperatures. In my case, I went with making the endpoint per user, so the path is /user/{uid}/user_temp_post. In Drupal 7 you would accomplish this custom access check by simply specifying something like this in your hook_menu:

'access callback' => 'my_module_access_callback',

In Drupal 8, however, we use a my_module.routing.yml file for the routes we are defining. So we also need to specify in this file what the criteria for allowing access should be. I found the user.module to be a very good example of this. My route for the temperature POST ended up like this:
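A sketch of what such a routing entry could look like (the route name and controller class here are assumptions based on the description; the custom access requirement mirrors how user.module declares its own checks):

```yaml
# user_temp.routing.yml (sketch)
user_temp.post:
  path: '/user/{user}/user_temp_post'
  defaults:
    _controller: '\Drupal\user_temp\Controller\PostTempController::post'
  requirements:
    # Handled by the access check service tagged with this requirement.
    _access_user_temp_post: 'TRUE'
```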

In this case, '_access_user_temp_post' is the criterion for allowing access. You can see this in the user_temp.services.yml file of the module. From there you can also see that Drupal\user_temp\Access\PostTempAccessCheck is the class responsible for checking access to the route. In this class we must make sure to return a Drupal\Core\Access\AccessResult to indicate whether the user is allowed access or not.
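The access check class could be sketched roughly like this (method names and the key storage via the user.data service are assumptions; the real module on GitHub is the authoritative version):

```php
<?php

namespace Drupal\user_temp\Access;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Routing\Access\AccessInterface;
use Drupal\Core\Session\AccountInterface;
use Symfony\Component\HttpFoundation\Request;

/**
 * Sketch of a custom access check for the temperature POST route.
 */
class PostTempAccessCheck implements AccessInterface {

  public function access(Request $request, AccountInterface $account) {
    // Compare the API key from the x-user-temp header with the key
    // stored for the user.
    $key = $request->headers->get('x-user-temp');
    if ($key && $this->keyIsValid($key, $account)) {
      return AccessResult::allowed();
    }
    return AccessResult::forbidden();
  }

  protected function keyIsValid($key, AccountInterface $account) {
    // Hypothetical: look up the stored key for the account and compare
    // in constant time.
    $stored = \Drupal::service('user.data')
      ->get('user_temp', $account->id(), 'api_key');
    return $stored && hash_equals($stored, $key);
  }

}
```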

Some potential questions about the approach

From there on, the code for the POST controller should provide you with the answers you need. And if the code is not enough, you can try reading the tests of the client part or the Drupal part. I will proceed by answering some theoretical questions about the implementation:

How is this different from using the session cookie?

It is different in two aspects. The API key will not expire for reasons beyond your control, or more precisely, the device's control. You can also reset the API key manually if you want it to expire. The other big difference is that if your API key is compromised, your account is not compromised in any way (as would be the case if a valid session cookie were compromised). Beyond that, please observe that in one area this is no different from using a session cookie: the requests should be made over HTTPS, especially if you are using a wifi connection.

How can I further strengthen the security of this model?

One "easy" way to do this is to not expose the API key as part of the request. I was originally planning to implement this, but realised it might make my original point a little less clear. What I would do as another "lo-fi" hardening would be to make the x-user-temp header contain just a hash of the temperature sent and the user's API key. This way, someone sniffing the requests would just see that the x-user-temp header changes all the time, and it would take a considerable effort to actually forge the requests (compared to just observing the key in the header).
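On the client side, that hardening could be sketched like this (plain PHP; the payload shape is an assumption, and an HMAC is used here as the concrete hashing scheme):

```php
<?php

// Hypothetical hardening sketch: instead of sending the raw API key,
// send an HMAC of the payload keyed with the API key. The server can
// recompute the same value from the stored key and the received body,
// so the key itself never travels over the wire.
$api_key = 'YOUR_API_KEY';
$payload = json_encode(['temp' => 21.5]);
$signature = hash_hmac('sha256', $payload, $api_key);

// Send $payload as the request body and $signature in the
// x-user-temp header.
```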

Why are you using nodes? Isn't that very much overhead for this?

This is a fair point. It might be a bit overkill for something so simple. But there are two bonus parts about using nodes:

We can use views to display our data.

We can ship the views, content types and fields as configuration with our module.

This last part is especially powerful in Drupal 8, and incredibly easy to accomplish. For the files required for this particular implementation, you can reference the config/install directory of the module.

But since you are posting nodes, why aren't you using the REST module?

I admit it, I have no good reason for this beyond that I wanted to make this post be about implementing API keys for authentication. Also, here is a spoiler alert: Code examples part 3 will actually be using the REST module for creating nodes.

What if I want to monitor both my living room and my wine cellar? This is only one endpoint per user!

I am sorry for not implementing that in my proof of concept code, but I am sure you can think of a creative solution to the problem. Also, luckily for you, the code is open source so you are free to make any changes required to monitor your wine cellar. "Pull requests welcome" as they say.

As always, if you have any questions or criticism (preferably beyond the points made above), I would love to hear your thoughts on this subject in the comments. To finish it all off, I made an effort to find a temperature related gif. Not sure the effort shows in the end result.

As promised, I am posting the code for all the examples in the article about Drupal and the Internet of Things. Since I figured this could also be a good excuse to exemplify different approaches to securing these communication channels, I decided to use a different strategy for each code example. So here is the disclaimer: these posts (and maybe especially this one) do not necessarily contain the best practices for establishing a communication channel from your "thing" to your Drupal site. But this is one example, and depending on the use case, who knows, this might be the easiest and most practical for you.

So, the first example we will look at is how to turn your Drupal site on and off with a TV remote control. If you did not read the previous article, or if you did not see the example video, here it is:

The Drupal site has enabled a module that defines an endpoint for toggling the site maintenance mode on and off

The Drupal site is toggled either on or off (depending on the previous state).

See any potential problems? Good. Let's start at the beginning.

Receiving IR and communicating with Drupal

OK, so this is a Drupal blog, not a microcontroller or JavaScript blog. I won't go through this in detail here, but the fully commented source code is on GitHub. If you want to use it, you will need a Tessel board, though. If you have one and want to give it a go, the easiest way to get started is probably to read through the tests. Let's just sum it up in a couple of bullet points, real quick:

All IR signals are collected by the Tessel. Fun fact: There will be indications of IR signals even when you are not pressing the remote.

IR signals from the same button are rarely completely identical, so some fuzzing is needed in the identification of a button press

Figuring out the "signature" of your "off-button" might require some research.

Configure the code to pass along the config for your site, so that when we know we want to toggle maintenance mode (the correct button is pressed), we send a request to the Drupal site.

Receiving a request to toggle maintenance mode

Now to the obvious problem. If you expose a URL that turns the site on and off, what is to stop any random person from toggling your site status just for kicks? Here is the part where I want to talk about different methods of authentication. Let us compare this to the actual administration form where you can toggle maintenance mode. What stops people from just using that? Access control. You have to actually log in and have the correct permission (administer site configuration) to be able to see that page. Now, logging in with a microcontroller is of course possible, but it is slightly more impractical than for a human. So let's explore our options, in three posts, this being the first. Since this is the first one, we will start with the least flexible option, but perhaps the most lo-fi and lowest-barrier one: we are still going to use the permission system.

Re-using your browser login from the IR receiver

These paragraphs are included in case someone reading this needs background info about this part. If this seems very obvious, please skip ahead two paragraphs.

Web apps these days do not require log-ins on each page (that would be very impractical), but instead use a cookie to indicate that you are still trusted to be the same user as when you logged in. So, for example, when I am writing this, it is because I have a session cookie stored in my browser, and this indicates I am authorised to post nodes on this site. When I request a page, the cookie is passed along with it. We can do the same passing of a cookie on a microcontroller.

Sending fully authenticated requests without a browser

So, to figure out how to still be authenticated as an admin user, you can use the browser dev tools of your choice. Open a browser where you are logged in as a user allowed to put the site into maintenance mode. Now open your browser dev tools (for example with Cmd-Alt-I in Chrome on a Mac). In the dev tools there will be a network tab. Keep this active while loading a page you want to get the session cookie from. You can now inspect one of the requests and see what headers your browser passed on to the server. One of these is the header Cookie. It will include something along the lines of this (it starts with SESS):

SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX

Since I am a fan of animated gifs, here is the same explanation illustrated:

This is the session cookie for your session as an authenticated user on your site. Since we now know it, we can request the path for the toggle functionality from our microcontroller, passing this cookie along as a header, and toggle the site as if we were accessing it through the browser.
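As a quick sketch of the idea, the same request can be made from any HTTP client by copying the cookie from the browser (the cookie value here is the made-up one from above, and the path is the one the module defines):

```shell
# Sketch: call the toggle endpoint, authenticated with the session
# cookie copied from the browser's dev tools.
curl -H "Cookie: SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX" \
  https://example.com/maintenance_mode_ir
```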

The maintenance_mode_ir module

So what is happening in that module? It is a very basic module, actually mostly generated by the super awesome Drupal Console. To again sum it up in bullet points:

It defines a route in maintenance_mode_ir.routing.yml (example.com/maintenance_mode_ir)

The route requires the permission "administer site configuration"

The route controller checks the StateInterface for the current state of maintenance mode, toggles it and returns a JSON response about the new state

The route (and so the toggling) will never be accessible for anonymous users (unless you give the anonymous users the permission "administer site configuration", in which case you probably have other issues anyway)

There are also tests to make sure this works as expected
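The toggling described in the bullet points above could be sketched roughly like this (class and method names here are assumptions; the state key system.maintenance_mode is the one Drupal 8 core uses):

```php
<?php

namespace Drupal\maintenance_mode_ir\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;

/**
 * Sketch of a controller that toggles maintenance mode via the state API.
 */
class MaintenanceModeIrController extends ControllerBase {

  public function toggle() {
    $state = \Drupal::state();
    // Flip the current maintenance mode state.
    $new_state = !$state->get('system.maintenance_mode');
    $state->set('system.maintenance_mode', $new_state);
    // Report the new state back to the caller as JSON.
    return new JsonResponse(['maintenance_mode' => $new_state]);
  }

}
```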

When would you want to use this, and what are the considerations and compromises?

Now, your first thought might be: would it not be even simpler to just expose a route where requests would turn the site on and off? We wouldn't need to bother with finding the session cookie, passing it along and so on. A legitimate question, and of course true in the sense that it is simpler. But this is really the core of any communication taking place between your "things" and Drupal (or any other backend): you want to make sure it is secured in some way. Of course, being able to toggle maintenance mode is probably not something you would want to expose anyway, but you should also use some sort of authentication even if it were only monitoring of temperature. Securing it through the access control in Drupal gives you a battle-tested foundation for doing this.

Limitations and considerations

This method has some limitations. Say, for example, you are storing your sessions in a typical cache storage (like Redis). Your session will expire at some point. Or, if you are using no persistence for Redis, it will just be dropped as soon as Redis restarts. Maybe you are limited by your PHP session lifetime settings. Or maybe you just accidentally log out of the session where you "found" the cookie. Many things can make this authenticated request stop working. But if all you are doing is hooking up a remote control reader to make a video and put it on your blog, this will work.

Another thing to consider is the connection of your "thing". Is your site served over a non-secure connection while your "thing" sends requests through a public wifi? You might want to reconsider your tactics. Also, keep in mind that if your session is compromised, it is not only the toggling of maintenance mode that is compromised, but the actual administrator user. This would not necessarily be the case with another form of authentication.

Now, the next paragraph presented to you will actually be the comments section. The section where you are encouraged to comment on inconsistencies, forgotten security concerns or praise about well chosen gif animations. Let me just first remind you of the disclaimer in the first paragraph, and the fact that this is a series of posts exploring different forms of device authentication. I would say the main takeaway from this first article is that exposing different aspects of your Drupal site to "the physical world", be it remote controlled maintenance mode or temperature logging, requires you to think about how you want to protect these exposed endpoints. So please do that, enjoy this complementary animated gif (in the category "maintenance"), and then feel free to comment.

The Internet of Things (or IoT for short) is probably even more of a buzzword than "Headless Drupal", but maybe not so much in Drupal land. As I am a man of buzzwords, let’s try to combine these things in one article (also, there will be video demos).

Or maybe a couple of articles. I feel I have so much to say about this, partly because many articles on the subject of IoT deal with having your "thing" on your local network and playing with it over the wifi. We are not going to do that here, as your local network is not the internet. Of course, you could set up port forwarding on your router to actually put that "thing" on the internet. And then that thing would be available to all the hackers that would want to access it. So we are not going to do that either in this article. For a first article, I want to explain how I see Drupal and IoT connecting, and explore the patterns for this.

When we refer to “The Internet of Things” we often refer to devices capable of networking. This could be a car, cell phone or something like a Raspberry Pi, Arduino or Tessel. In this article I will simply refer to a "thing". This means a device we want to extract data from, or interact with via Drupal.

A common scenario

First, let’s look at a common way of testing out the Internet of Things at home. You have a Raspberry Pi and a temperature sensor. Now, the Raspberry Pi probably runs a flavour of Linux, so you can actually install Apache, MySQL, PHP and finally Drupal on it. Then maybe you find a Python script that reads the temperature, and you create a node in Drupal with the PHP filter that will exec this Python script and print the result inside Drupal. That will work. Except it is not a good idea, for several reasons. Let’s go through them:

Your "thing" will be both a sensor and a webserver. Make it do one thing, and focus on that

Your Drupal site will run on your local network, and to access it from (for example) your office you would have to make it publicly available in some way.

Your Drupal site will have the PHP module installed. You don’t want that.

Your PHP code will be doing system calls. Please don’t do that.

Now that I have got that rant out of the way, let me also just say that if you want to go down that road, it of course has a low entry barrier, and if you are restricting access to your local network, the security concerns look a little better. And of course, the PHP-in-a-node part is not strictly necessary, it just fits with my arguments. So, if you are looking for that kind of tutorial, there are plenty of others on the internet.

Patterns for communication

As I have now been ranting a bit, let me just point out that this is not a canonical article about IoT best practices or some absolute truths. These are just my opinions on how one could approach this exciting buzzword. Continuing on, let’s look at some ways of interacting between a Drupal site and your "thing".

If we look at it as simply one "thing" and one Drupal site, you have two actors in this communication model. So to establish two-way communication, we would want both the "thing" talking to Drupal, and Drupal talking to the "thing". This "thing" may represent something physical in the world, like a temperature sensor or a relay switch. So basically it is an interaction between the physical world and your site, so let’s use that metaphor. This article will deal with the first and simplest concept of this interaction:

The physical world talking to Drupal

Isolated, the wording of that heading looks kind of poetic, doesn’t it?

When the physical world is talking to Drupal, I mean it as somewhat of a “push” mode for your "thing". Let’s say you are monitoring the temperature in your apartment (physical world) and want to communicate this to your Drupal site. A simple thing to do here is to define an endpoint on your Drupal site where your "thing" just posts updates, and the Drupal site stores the information (of course with some sort of authentication). A one-way communication to push updates from the physical world to Drupal.

Another theoretical and more intricate form could be something like this:

Say you have a physical store that is also an online store (a typical use case for Drupal). And you are about to have a sale in both places. But you want visitors in the store to have the same opportunity as the online visitors to get the good deals. In this scenario you could make it so that the moment the lock on the door is opened, a request is sent to the online store enabling "sale mode". And when you close in the afternoon, the online store's "sale mode" automatically gets disabled. This way, the physical store (or more precisely, the lock on the physical store) actually dictates the state of the online store.

Granted, this is a theoretical example, so let’s look at practical and implemented examples instead. I have put together a few quirky demos with varying degrees of usefulness. All examples are actual Drupal 8 sites running on Pantheon, so there is no localhost Drupal instance to talk about. This is the physical world talking to the internet.

Remote controlled Drupal 8

The first one is kind of reminiscent of the above example, although maybe not as useful. It is a remote control to "shut down" the Drupal site (put it in maintenance mode). Or more precisely: I am turning off the site with my TV remote. If you are wondering why the site refreshes a couple of times, it is because I used one hand to film and one hand to press the remote, so I had the site update itself every 2 seconds.

[embedded content]

Temperature monitoring

The second one is a more common one: presenting the current temperature at a path in Drupal. Here we are just polling for updates to make the video actually show that it works, but a more practical approach is probably to post updates every 10 or 30 minutes. Also note that we can now view the temperature from anywhere in the world, while still having our device unreachable over the internet. If you are wondering why I am using water, it is because it triggers temperature changes much faster. The glass contains cold water, the cup holds warm water.

[embedded content]

And here is a third one, since I felt like being silly. This one displays the temperature, draws a nice historical graph of it, and changes the color of the header based on the temperature. I must admit that the last part is purely client side in the video, but it could theoretically be expanded to actually do this through the color module. I must also admit that the actual hot/cold color calculation could use some tweaking (more than the 6 minutes spent on it), but you probably get the picture.

[embedded content]

Drupal 8 as a "surveillance backend"

The last example is something a little more elaborate, and maybe even practical. It uses a sound sensor to listen for sound changes. When the trigger fires, it takes a picture with the webcam on my Mac (you can see the light next to the camera after I snap my fingers), posts it to my Drupal site, and creates a node. A simple surveillance camera with Drupal as a backend. Also, a very concrete example of the physical world interacting with Drupal, as it is the snapping of my fingers (very physical) that creates a node in Drupal.

[embedded content]
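For the curious, the request the camera script sends could look roughly like this sketch. The "surveillance" content type and the field_snapshot field are hypothetical names; the _links/type structure is what Drupal 8's HAL+JSON format expects when creating a node, and in a real setup the image would more likely go through a separate file upload.

```javascript
// Sketch of the payload a camera script could POST to /entity/node on a
// Drupal 8 site (Content-Type: application/hal+json) to create a node.
// The "surveillance" content type and field_snapshot field are hypothetical.

function buildNodePayload(siteUrl, takenAt, base64Jpeg) {
  return {
    _links: {
      // HAL needs to know which bundle (content type) we are creating.
      type: { href: siteUrl + '/rest/type/node/surveillance' }
    },
    title: [{ value: 'Sound trigger at ' + takenAt }],
    // Inlining the image as base64 keeps the sketch in one piece; a real
    // setup would probably upload the file separately and reference it.
    field_snapshot: [{ value: base64Jpeg }]
  };
}

var payload = buildNodePayload('https://example.com', '2014-11-09T10:00:00Z', '/9j/4AAQ...');
console.log(payload.title[0].value);
```

Every snap of the finger then becomes one POST request, and one node.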

This article is already getting pretty lengthy, so I'm going to end it here. And before you ask: no, the code for the examples is not yet available. And yes, it will be made available. As I said, all these examples were put together quickly on a Sunday morning, and they are all very hardcoded and hackily put together. I will post an update here, and probably a code-dedicated blog post about just that.

Also, I will be following up with the next scenario: interacting with the physical world from Drupal. If you have any questions, please feel free to ask them in the comments. And I would be delighted to hear about alternative ways of doing this, people doing similar things, or other thoughts on the subject (rants or ideas).

The ending of this post will be a lo-fi gif describing what sceptics usually call the Internet of Things - The Internet of Lightbulbs. Have a nice week!

If you click the link you can see an animated gif of how I edit the Bartik node template and the change is reflected in a simple single-page app. Or one of these hip headless Drupal things, if you want.

So I thought I should do a quick write-up on what it took to make it work, what disadvantages come with it, what does not actually work, and so on. But then I thought to myself: why not make a theme that incorporates my thoughts from my last post, "Headless Drupal with head fallback"? So I ended up making a proof of concept that is also a live demo of a working Drupal 8 theme, with the first page request rendered on the server and the subsequent requests rendered fully client side. It uses the same node template for both full views and the node listing on the front page. So if you are eager and want to see that, this is the link.

Next, let's take a look at the inner workings:

Part 1: twig.js

Before I even started this, I had heard of twig.js. So my first thought was to just throw the Drupal templates to it, and see what happened.

Well, some small problems happened.

The first problem was that some of the filters and tags we have in Drupal are not supported out of the box by twig.js. Some of these are probably Drupal specific, and some are extensions that are not supported out of the box. One example is the tag {% trans %} for translating text. But in general, this was not a big problem. Except that I did as I usually do when doing a POC: I quickly threw together something that worked, resulting, for example, in the trans tag just returning the original string. Which is obviously not the intended use for it. But at least now the templates could be rendered. Part one, complete.

Part 2: Enter REST

Next I needed to make sure I could request a node through the REST module, pass it to twig.js and render the same result as Drupal would do server side. This turned out to be the point where I ended up with the worst hacks. You see, ideally I would just have a JSON structure that represents the node, and pass it to twig.js. But there are a couple of obvious problems with that.

Consider this code (the following examples are taken from the Bartik theme):

This is unproblematic. If we have a node.url property and a node.label property on the object we send to twig.js, this would just work out of the box. Neither of these properties is available like that in the default REST response for a node, however, but a couple of assignments later, that problem went away as well.
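The assignments in question could look something like this sketch, assuming the default Drupal 8 REST response shape, where each field is an array of value objects and HAL provides a self link:

```javascript
// Sketch of mapping a Drupal 8 HAL+JSON node response to the flat
// properties the template expects (node.label, node.url). Assumes the
// default response shape: fields are arrays of { value } objects.

function toTemplateVars(response) {
  return {
    label: response.title[0].value,
    // The HAL "self" link points back at the node's canonical URL.
    url: response._links.self.href
  };
}

var node = toTemplateVars({
  title: [{ value: 'Hello world' }],
  _links: { self: { href: '/node/1' } }
});
console.log(node.label + ' ' + node.url);
// logs: Hello world /node/1
```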

Now, consider this:

{{ content|without('comment', 'links') }}

Let's start with the filter, "without". Well, at least that should be easy. We just need a filter that makes sure the comment and links properties on the node.content object are not printed here. No problem.
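In plain JavaScript, such a filter is just a shallow copy that skips the excluded keys; something along these lines (how you register it with twig.js is a separate detail):

```javascript
// A plain-JS take on Twig's "without" filter: return a shallow copy of the
// render object minus the named keys, so {{ content|without('comment', 'links') }}
// skips those parts.

function without(content) {
  var excluded = Array.prototype.slice.call(arguments, 1);
  var copy = {};
  Object.keys(content).forEach(function (key) {
    if (excluded.indexOf(key) === -1) {
      copy[key] = content[key];
    }
  });
  return copy;
}

var content = { body: '<p>Hi</p>', comment: '...', links: '...' };
console.log(Object.keys(without(content, 'comment', 'links')));
// logs: [ 'body' ]
```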

Now to the problem. The content variable here should include all the rendered fields of the node. As was the case with label and url, .content is not actually a property in the REST response either. This makes the default output from the REST module not so usable to us, because to make it generic we would also have to know which fields to compose together into this .content property, and how to render them. So what then?

I'll just write a module, I thought. As I often do. Make it return more or less the render array, which I can pass directly to twig.js. So I started looking into what this looks like now, in Drupal 8, and at how I could tweak the render array down to more or less the least amount of data needed to render the node. I saw that I needed to recurse through the render array 0, 1 or 2 levels deep, depending on the properties. So I would get, for example, node.content with markup in all its children, but also node.label without children, just the actual title of the node. Which again made me start to hardcode things I did not want in the response, just like I had just started hardcoding things I wanted from the REST response.

So I gave up on the module. After all, this is just a hacked-together POC, so I'll be frank about that part. And I went back to hardcoding it client side instead. Not really the most flexible solution, but at least, part two: complete.

Part 3: Putting the pieces together

Now, this was the easy part. I had a template function that could accept data. I had transformed the REST response into the pieces I needed for the template. The rest was just adding a couple of AJAX calls and some pushState for the history (which reminds me: this probably does not work in all browsers). And then bundling things together with some well-known front-end tools. Of course, this is all in the repo if you want all the details.

Conclusions

Twig on the server and on the client. Enough said, right?

Well. In its current form, this demo is not something you would just start to use. But hopefully it gives you some ideas. Or inspiration. Or maybe it will inspire (and inform) me of the smartest way to return a "half-rendered render array".

Also, I would love to get some discussion going regarding how to use this approach in the most maintainable way.

I'm going to end this blog post with a classy gif from back in the day. And although it does not apply in the same way these gifs were traditionally used, I think we can say that things said in this blog post are not set in stone, neither in regard to construction nor to architectural planning.

First of all, let's examine in what way this simple blog is headless. It is not headless in the sense that it offers all the functionality of Drupal without using Drupal's front end. For example, these words I am typing are not typed into a decoupled web app or command-line tool. Its only headless feature is that it loads content pages with AJAX through Drupal 8's new REST module. Let's look at a typical setup for this, and how I approached it differently.

A typical setup

A common way to build a front-end JavaScript application leveraging a REST API is to use a framework of your choice (Backbone, Angular, or some other *.js) and build a single-page application (or SPA for short). Basically this could mean that you have an index.html file with some JavaScript and stylesheets, and all content is loaded with AJAX. This also means that if you request the site without JavaScript enabled, you would just see an empty page (unless, of course, you have some way of scraping the dynamic content and outputting plain HTML as a fallback).

Head fallback

I guess the "headless" metaphor sounds strange when I turn it around to talk about "head fallback". But what I mean by this is that I want a user to be able to read all pages with no JavaScript enabled, and I want Drupal (the head) to handle this. All URLs should also contain (more or less) the same content whether you are browsing with JavaScript or without it. Luckily, making HTML is something Drupal has always done, so let's start there.

Now, this first part should be obvious. If a user comes to the site, we just show the output of each URL as intended by the active theme. This is an out-of-the-box feature of Drupal (and any other CMS). OK, so the fallback is covered. The next step is to leverage the REST module and load content asynchronously with AJAX.

Head first, headless later

A typical scenario would be that for the front page I would want to request the "/node" resource with the header "Accept: application/hal+json" to get a list of nodes. Then I would want to display these the same way the theme displays them statically on a page load. The usual way of doing this is that when the document is ready, we request the resource and build and render the page client side. This is impractical in one way: you are waiting for the entire document to load before rendering anything at all. Or maybe even worse: you could be waiting for the entire /node list to load, only to destroy the DOM elements with the newly fetched and rendered JSON. This is bad for several reasons, but one concrete example is a smartphone on a slow network. That client could start rendering your page on the first chunk of HTML transferred, and that would maybe be enough to show what is called the "above the fold" content. This is also a criterion in the often-used Google PageSpeed. Meaning, in theory, that our page would get slower (on the first page load) by building a SPA on top of the fallback head.

Some "headless Drupal" goodness is very hip, but not at the cost of performance and speed. So what I do for the first page load is trust Drupal to do the rendering, and then initialize the JavaScript framework (Mithril.js in my case) when I need it. Take, for example, you, dear visitor, reading this right now. You probably came to this site via a direct link. Now, why would I need to set up all the client-side routes and re-render this node when all you probably wanted to do was read this article?
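The idea can be sketched like this: let Drupal's server-rendered page stand as-is, and only take over navigation client side when the visitor actually clicks an internal link. This is a simplified sketch, not the actual theme code; the DOM wiring is guarded so the testable part also runs outside a browser.

```javascript
// Sketch of "head first, headless later": the first page is rendered by
// Drupal; the client-side app only kicks in when the visitor navigates.

function isInternalLink(href, origin) {
  // Only take over same-origin (or relative) links.
  if (!href) return false;
  return href.charAt(0) === '/' || href.indexOf(origin) === 0;
}

function onLinkClick(e, origin, startApp) {
  var a = e.target;
  if (a.tagName === 'A' && isInternalLink(a.getAttribute('href'), origin)) {
    e.preventDefault();
    // First in-page navigation: set up routes, fetch over REST, render.
    startApp(a.getAttribute('href'));
  }
}

// Browser-only wiring, guarded so the sketch can run anywhere.
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) {
    onLinkClick(e, location.origin, function (path) {
      history.pushState({}, '', path);
      // ...request the node as hal+json and render it client side...
    });
  });
}
```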

Results and side-effects

OK, so now I have a fallback for JavaScript that gives me this result (first picture is without JavaScript, second is with JavaScript):

As you can see, the only difference is that the Disqus comment count cannot be shown in the non-JS version. So the result is that I have a consistent style for both JS and non-JS visitors, and I only initialize the headless part of the site when it is needed.

A fun (and useful) side effect is the page speed. Measured with Google PageSpeed, this now gives me a score of 99 (with the only suggestion being to increase the cache lifetime of the Google Analytics JS).

Is it really headless, then?

Yes and no. Given that you request my site with JavaScript enabled, the first page request is a regular Drupal page render. But after that, if you choose to go to the front page or any other article, all content is fetched with AJAX and rendered client side.

Takeaways and lessons learned

I guess some of these are more obvious than others.

Do not punish your visitors for having JavaScript disabled. Make all pages available to all users. Mobile first is one thing, but you could also consider no-JS first. Or both?

Do not punish your visitors for having JavaScript enabled. If you render the page based on an AJAX request, the time between the initial page load and the actual render will be longer, and this is especially bad on mobile.

Subsequent pages are way faster to load with AJAX, both on mobile and desktop. You really don't need to download more than the content (that is, the text) of the page you are requesting, when the client already has the assets and wrapper content loaded in the browser.

Disclaimers

First: these techniques might not always be appropriate for everyone. You should obviously consider the use case before using a similar approach.

If you, after reading this article, find yourself turning off JavaScript to see what the page looks like, you might notice that there are no stylesheets any more. Let me just point out that this would not be the case if your _first_ page request had been without JavaScript. By requesting and rendering the first page with JavaScript, your subsequent requests tell my server that you have JavaScript enabled, and thus I also assume you have stored the CSS in localStorage (as the JS does). Please see this article for more information.

Let's just sum this up with this bad taste gif in the category "speed":

It has been a weekend in the spirit of headless Drupal, front-end optimizations and server-side hacks. The result is that I updated my blog to Drupal 8. Since you are reading this, it must mean it is live. First, let's start with the cold facts (almost chronologically ordered by request lifetime):
Other front-end technologies used that do not directly relate to the request itself:

So, HHVM, huh?

Yeah, that's mostly just a novelty act. There is no real gain there. Quite the opposite: I have added some hacks to get around some limitations. HHVM does not work very well with logged-in users right now, but works alright for serving anonymous content.

When I reload and look at the source code, there is no CSS loading. WAT?

Yeah, I am just assuming you remember the styles from the last page load. Also, I have made it a mission to have a 1-HTTP-request CMS, right?

No, really. How does that work?

The real magic happens by checking whether you as a user have already downloaded my page earlier. If you have, I don't need to serve you CSS, as far as I am concerned. You should have saved it last time, so I just take care of that.

OK, so you use a cookie and save CSS in localStorage. Does that not screw with the Varnish cache?

Good question. I have some logic to internally rewrite the cached pages with a key to the same Varnish hash. This way, all users trying to look at a CSS-less page with the CSS stored in localStorage will be served the same page, and PHP will not get touched.

What a great idea!

Really? Are you really sure you have thought of all the limitations? Because they are many. But seeing as this is my personal tech blog, and I like to experiment, it went live anyway.

Give us the code!

Sure. The theme is on GitHub. The stupid cache module is on GitHub. Please be aware that it is a very bad idea to use it if you have not read the code and understood what it does. And since I am feeling pretty badass right now, let's end with Clint Eastwood as an animated gif.

The site itself is a pretty simple site. It is about the mobile game Crash n Dash (check it out, by the way). It contains a front page, where we also display a somewhat real-time statistic of online users. And it has a high-scores list. As you probably understand, this requires a custom module, so there was that. We also have a simple custom theme, built on the Foundation framework. So there was that.

This allowed me to learn more about making a module in Drupal 8, the Guzzle library for making requests, and some good ol' Twig for the theme. I'll cover my findings in separate posts later.

Second, a word about security. Since Drupal 8 is still in alpha, who knows what kind of bugs and potential security holes you can find in there, right? So I ended up disallowing login through regular channels. Since this particular server is behind Varnish, disallowing it on the default address was really easy; I just put this in my VCL file:

eiriksm/95c3c51148a9d0a56979

What this does is effectively deny all logins to your site on port 80, since no user will ever get a cookie. OK. Well, that does not stop someone from logging in if they find the Apache port, right? So I put this in my virtual host for the domain (in the directory directive):

eiriksm/c35f1209ee95964b5a99

Of course this does not cover all kinds of other tactics that some people might want to try, but at least we are limiting the possibilities to do harm. So the next project on D8 is this blog. I mean, as developers, do we really have any excuse for not moving to Drupal 8 with these simple blog sites we put up? I am moving right after I get my feet wet with the migrate module in D8, as I have really enjoyed the projects I have used migrate for earlier.

Full disclosure: I am also the author of the website mentioned, and of the game referenced on that site. It's a free game, but I think it is still fair to mention. This blog post is a cross-post from the Crash n Dash tech blog and is in part written for shameless self-promotion :) Let's end the post with the animated gif that actually is on the front page of that shiny new Drupal 8 site.

So when I first realised that I was neglecting this blog, I found some comfort in the fact that at least it hadn't been a year, right? OK, so now it has almost been a year. Does that mean I have stopped finding solutions to my Drupal problems (as the "About me" block states)? Well, no. The biggest problem is remembering to blog about them, or finding the time. But finding the time is not a Drupal problem, and I most definitely have not found the solution to that problem. Anyway, I digress. Let's break the streak with a quick tip that I use all the time: syncing live databases without having drush aliases.

If you use drush aliases, you could just do drush sql-sync. But even when I do, I still prefer this method, as I find it extremely easy to debug.

First I make sure I am in my Drupal root:

$ cd /path/to/drupal

Then I usually make sure I can bootstrap Drupal from drush:

$ drush st

If that output says my database is connected, then let's test the remote connection and do the sync in one go (the user, host and path are of course placeholders for your own setup):

$ ssh user@example.com "drush --root=/path/to/drupal sql-dump" | drush sql-cli

OK. So what does this do?

The first part says we want to SSH into our remote live server. Simple enough. The next part, in double quotes, tells our terminal to execute the command on the remote server. And the command tells our remote server to dump the entire database to the screen. Then we pipe that output to the command "drush sql-cli" on our local machine, which basically dumps the database straight into a MySQL command line.

Troubleshooting:

If you get this error:

bash: drush: command not found

I guess you could get this error if you are, for example, on shared hosting and use a local installation of drush in your home folder (or something). Simply replace the word drush with the full path to your drush command. For example:

So I just made a new theme for my blog, and as it turns out, I got one extra HTTP request. I started using Font Awesome with the theme, and the font file is too big to be embedded as a base64-encoded font. Darn it. So, up to 2 internal HTTP requests.

But anyway, I took a new direction in how I cache my CSS, and the result is way better for mobile.

The CSS is looped through (with the core function drupal_load_stylesheet()) and then processed the same way core processes CSS with aggregation. This way no image paths get messed up (and the same goes for the path to Font Awesome). Since this is heavy to do on each page load, I cache the result of this processing, and in the page files I just print out a piece of JavaScript that you can see if you view the source. The same goes for the JavaScript: a different loop, but the same result. This is, for example, my CSS function:

eiriksm/d0e8ff026205d6bdd2f0

And then to the JavaScript. It is dynamically printed on cache flush, so just view the source to see what it looks like. What the script does is check whether you have the CSS or JS for my page cached in localStorage. It is stored inside an object with a key, so I can clear the cache and invalidate your localStorage cache. The key is generated on each flush-cache action, with the hook_flush_caches() hook. If you don't have my assets, the CSS is loaded by AJAX, and I do a synchronous GET for the JS (don't get me started on why it is synchronous). So how do I avoid showing the page without styling while waiting for the CSS? Simple. In the head of the page there is a style tag that says #page should be display: none. When my CSS loads, it has rules saying #page should be shown. Done.
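The cache check itself boils down to something like this sketch. The storage object and fetch function are injected here so the logic is testable; in the real script they would be localStorage and a (synchronous) XHR, and the version key would be the one printed by the server on cache flush.

```javascript
// Sketch of the localStorage asset cache: assets are stored together with a
// version key that changes on every Drupal cache flush, so flushing the
// server cache also invalidates the client's copy.

function getAsset(storage, name, version, fetchAsset) {
  var cached = storage[name] && JSON.parse(storage[name]);
  if (cached && cached.version === version) {
    return cached.data; // Hit: no HTTP request needed.
  }
  // Miss or stale: fetch (AJAX in the real script) and store for next time.
  var data = fetchAsset(name);
  storage[name] = JSON.stringify({ version: version, data: data });
  return data;
}

var store = {}; // stands in for window.localStorage
getAsset(store, 'css', 'v1', function () { return 'body{color:red}'; });
// A second call with the same version key is served from the "cache":
console.log(getAsset(store, 'css', 'v1', function () { return 'never used'; }));
// logs: body{color:red}
```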

So then you think, "Man, all this foreach stuff clutters up the template file". Nope. These are the actual lines from html.tpl.php:

eiriksm/cd08701e967ac298926f

"And what's with the non-jquery javascript syntax?" you may ask. Naturally this is because the same code also loads my javascripts, and so jquery is not included at that point in the code. And also, writing native javascript is fun.

So this makes for 4 internal HTTP requests on the first page load, and 2 after that.

Expanding on this, I also used the same trick on the $page variable, so the entire page was loaded with AJAX (or localStorage, if available). So I actually got a full-featured, fully styled CMS in a couple of KB and 2 HTTP requests. This has some side effects, obviously, so I skipped that one on the live page.

The module, client_cache, will probably be made available one of these days, if I get around to it. But it's probably not a good idea for most sites!

Where I work, we have some bigger clients for whom we have advanced integrations with their accounting systems, stock control, exports to old Windows systems, and the list goes on. So these things are not something we want to (or in many cases can) run on the dev version of the site.

To keep things in version control, and to avoid having to turn things off when dumping in a fresh database copy, we use the $conf variables in settings.php.

The file settings.php is not checked into Git, and this is also where we automatically turn off JS and CSS aggregation, by for example setting

$conf['preprocess_css'] = 0;

And some other stuff. But we also add our own variable.

$conf['environment'] = 'development';

This way we can do a check at the top of all integration functions:

eiriksm/f8ee6fd52e6e1a84c3e0

Keeping this in our production code ensures that integration functions are not run when developing on a dev site. Also, a lot of cron functions are moved to a separate drush command and run by the server cron instead of implementing hook_cron(). These will then never run on the dev site.

I am sure everyone has their own way of doing similar stuff, so please feel free to share your similar tricks, or improvements, in the comments.

A while back I wrote a blog post about loading my homepage in 1 HTTP request. As I said back then, this was only an experiment, but as promised I have done some testing to see if this was any use at all.

A short recap for those who do not want to read the whole (previous) article: the front page of this website is loaded in 1 internal HTTP request. All images, CSS, media queries and JavaScript are in one flat HTML file, served directly from the server cache. So, then you open your inspector and find out Google makes 2 requests, and I have the Twitter feed loaded async. OK. But 1 internal.

First, let me explain how I did the tests. I did it locally (so network lag should not be an issue), and with PhantomJS, a headless WebKit browser. This should mean that the page is fully loaded in a browser at the times presented, and that it's not just the request that is done. I also did 1000 runs on each setting, just to have a lot of numbers. I never tried going back to normal images instead of base64-encoded ones, though. I should probably do that too at some point. Anyway, here are the results:

First up: no caching, no aggregation, just how the front page of this blog is in this theme: an average of 1525 ms. Not very impressive. But the optimization effort is not very impressive either.

Second: cache pages for anonymous users, because the test requests are not logged in, and none of my visitors are logged in either. 131 ms. That is an improvement. This is localhost of course, but that is also the point.

Then: turn on the Boost module, serving plain HTML pages with a couple of aggregated CSS and JS files: 85 ms. Pretty darn fast. Let's try to get rid of the remaining HTTP requests.

The current version, how I now serve my homepage: 1 HTTP request, plain HTML from Boost. 92 ms. Darn, that is actually slower again. Browser cache definitely plays a role in rendering fast. Can I do some more tweaking to this?

On to the last test: get rid of all the whitespace I can find in the CSS and JS, and try again with 1 HTTP request. Down to 89 ms!

OK, conclusions: if you have a feeling your visitors will visit more than one page on your site, you have no need to go all the way down to 1 HTTP request. And a couple of disclaimers: this will probably vary depending on connections, how fast your computer is (for browser cache), and probably on where in the world you are accessing my site from. But with some variables out of the way, it seems that eliminating all requests is no performance gain compared to lowering the number of HTTP requests and minifying. Also, is it not interesting that getting rid of whitespace saved more than 3% in load time? Another reason why it's probably best to minify, gzip and aggregate. Also, other caching methods than Boost are probably more efficient, but I keep this blog at a cheap host, so Varnish, memcache and so on are not an option. And frankly, this post cannot cover both front- and back-end performance, right? For my next test: does anyone have any good tools for doing the same tests on a "headless mobile device"?

Well, not really. I mean, you can create webforms programmatically pretty easily. This tutorial will show you how easy it is. Or you could just use the Rules module if you just want the node created. But I also want to share the things that got me scratching my head, like creating the description of each webform component programmatically.

I am sure you have had this scenario as well. You enabled the webform module and taught the site admins how to create forms and add components. But each time they want to create a new webform, they send an email asking for help, and you practically end up creating the nodes for them, since it has already been 3 weeks since you last told them how it was done. OK, so let's make it dead simple for them: create a node with a title, and put all components in the body, one per line. Ah, no more emails, and code nerds can go back to their terminal and away from clicking with a mouse.

Use case:

So I have this site where you can sign up for parts of an order. Like a co-op. Let's say we are ordering kittens. So an admin puts out the news that he is shipping out a new order, and these are the kittens that are up for grabs.

Grey kitten

Black kitten

Killer kitten

Clown kitten

We want to create a webform to find out how many kittens of each type the users of the site would want in this order.

So instead of telling the admins to create a webform and add numeric components for each kitten, I just tell them to go ahead and click the big button that says "create order" (visible to admins only). In the title field, they give the order a name (like the name of the supplier), and in the body field they list all kittens available, one kitten per line. Optionally they can also add a URL to an animated gif of the kitten, if included on the same line in parentheses. So much easier for them, so they can concentrate on kitten distribution instead. I also added a description field they can use for a closer description of the order. OK, so this is the code (Drupal 7, obviously): eiriksm/c9e779b999b3da82158e

OK. This is all pretty straightforward, eh? So the thing that had me going nuts for a while was adding the links as the description of each component. Looking at a webform node object, one would think it would go in the "extra" array of each component. This is what a webform node with a component with a description looks like:

But after repeated attempts at adding it to the array (even trying to modify $n after it is saved, and saving it again), nothing seemed to do the trick. Luckily, webform has its own hooks. Enter hook_webform_component_presave(). Or "Modify a Webform component before it is saved to the database", as it says in the documentation. Perfect! Let's go ahead and add a link to the animated gif as a description:

eiriksm/76861b8115e9be0761a7

Bottom line? Webform is awesome for these kinds of user submissions, and with programmatic creation just like we want it, things just got real easy for the non-techies on this site. Now let's see how many killer kittens I can afford. Sorry for the large gif this time, but I just could not help myself. Killer kitten to follow (5.7MB):

Today I wanted to share the little shell script I use for setting up local development sites with a one-liner in the terminal. Be advised that I don't usually write shell scripts, so if you are a pro and I have made some obvious mistakes, please feel free to suggest improvements in the comment section. Also, while I have been writing this post, I noticed that Klausi also has a kick-ass tutorial and shell script on his blog. Be sure to read that as well, since that article is much more thorough than mine. But since my script is for those even more lazy, I decided to post it anyway.

The idea I had was pretty simple. I constantly need to set up a fresh, working copy of Drupal in various versions with various modules, e.g. for bug testing, prototyping, or just testing a patch on a fresh install. While drush make and installation profiles are both awesome tools for a lot of things, I wanted to install Drupal without writing make files and .info files, and at the same time generate the virtual host and edit my hosts file. And why not also download and enable the modules I need. For a while I used just

$ drush si

(site-install) on a specified directory for flushing and starting over, but partly because I have little experience writing shell scripts (I said that, right?), I thought: what the hey, let's give it a go. Fun to learn new stuff. On my computer the script is called vhost, but since that is not a descriptive name, and the script is all about being lazy, let's call it lazy-d for the rest of the article (lazy-d for lazy Drupal. Also, it is kind of catchy).

This will download Drupal 7 to example.dev in your www directory, create a database and user called example.dev, install Drupal with the site name example.dev, edit your hosts file so you can navigate to example.dev in your web browser, and download and enable the views, ctools, admin_menu and coffee modules.

Running the script like this:

$ ./lazy-d example.dev

will do the same, but with no modules (so Drupal 7 is default, as you might understand).

You can also use it to download Drupal 8, but the drush site install does not work for it at the moment (it did when I wrote the script a couple of weeks back, though). Drupal 6 is fully supported.

The script has been tested on Ubuntu 12.04 and Mac OS X 10.7. On Mac you may have to tweak the mysql variable at the top (depending on whether you have MAMP or XAMPP or whatever), and probably do something else with a2ensite and apache2ctl restart as well (again, depending on your setup).

Disclaimer: This is an experiment, and would probably not apply to most websites (if any). My mission: Load my homepage in 1 HTTP request without compromising layout or functionality. Just to be clear, the motivation is not to gain any performance from this, just to see if I can do it.

So, back to the point. Drupal has a lot of CSS, JS and image files by default. To load them all, you have to ask the server to serve each of them to you, simple as that. So if you have to ask fewer times, the page loads faster. In theory. This blog has over 40 requests on the front page, out of the box. Luckily, Drupal has a number of tools to minimize requests, among them the aggregation you can find in core, under the “performance” settings (depending on your version, this is located under “configuration->development” or “site configuration”). Cool. But this still leaves me at more than 10 HTTP requests. I want less! So what next?

The next step was to avoid loading too many images, so I decided to start by base64 encoding my burning logo (which actually consists of one file per letter).

Base64 encoding files is extremely simple. What I usually do is use a small PHP file containing only this:

The next step was to embed the fonts I use, without querying google fonts (hope they don’t read this, as I am not sure if I am allowed to embed them like this, does anyone know?). Same step here with the base64 encoding, and the styles went in to the head tag something like this:

The next two steps are kind of site-specific, but basically what I tried to do was iterate over the scripts and stylesheets and print the contents of the files instead of linking to them. So in a template file I put something like this:

eiriksm/1e8ff37c8c0e8641f1e7

Cool. We have no requests for CSS! Now on to the JavaScript.

This was tricky. I have a combination of standard, custom and a couple of compatibility scripts on my site. I had to throw out a couple of them and do some reordering, and then iterate and print them out in a similar manner to the stylesheets.

Since you are probably looking at this page in a full node view (or you would not be able to read this), you may have looked in the inspector and seen that I have well over 30 HTTP requests. Ugh, those external services, eh? The main reason for this is actually the addthis stuff I have at the bottom. So I have over 30 extra requests just to look modern. BUT: if you navigate to the front page, where I have tossed out all sharing options, I actually clock in at an awesome 4 requests. Two of them are Google Analytics. The other one is there for one reason only: I use the boost module to serve static pages, so if I want to display that talking burning bird without using cached data, I have to burden you with a lookup to search.twitter.com. I am sorry to burden you, visitor!

Mission is as much as it can be complete, and I am down to 1 internal HTTP request. Let’s celebrate with an animated gif, folks!

A lot of different people have started experimenting with Phonegap and Drupal. You have Jeff Linwood and his Drupal plugin for Phonegap for iOS, and while planning this post over the last weeks I discovered Drupalgap, which does some of the same things I will try to do in this post (actually it does more, but not all).

If you want to get up and running real quick, Drupalgap seems great. If you want to learn the code behind it, and extend it yourself (this was my motivation), keep reading.

This post is dedicated to writing Phonegap apps that post images to your nodes, right from your JavaScript. You just make an HTML file, include some JavaScript magic, and hey, we just posted images to our site using the camera on the phone.

As I have written a couple of posts now about posting content to your Drupal site from an app with the services module, I thought I should probably do a more tutorial-like post. I have started moving a lot of functions into a JavaScript file I call drupal.js, which has several methods of communicating with your Drupal services endpoint. Today I want to just scratch the surface and provide a working example app.

To get this working on your site, follow these steps:

Use Drupal 7 and services module v3.

Enable the modules Services and REST server.

Add a new services endpoint (under structure->services)

Enter a path to the endpoint and select REST server. Also use session authentication if you want to log in.

Add json as response formatter

Enable the node, file and user resources.

Be sure to have a content type enabled, and preferably with a body field and image field (for this example)

Download the app and test it out!

The app can be found here (Android, webOS, BlackBerry and Symbian only; iOS will not be released, as this is just an example app that I don’t want to get into the App Store).

Since this is a hobby project at the moment, I don’t have much time to work on the JS every day, but I will post more info on that in a later post. The idea is that you can call these functions from your HTML app, so you don’t have to write all the ajax calls yourself.

For example for nodes, you can do something like this:

eiriksm/59371b0bb8e16788dfdd

The last parameter is the success function (optional). There is also a default success function in drupal.js, but it is just an alert telling you that it was a success.
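The gists are not rendered in this version of the post, so purely as an illustration, a node-posting helper along these lines could look like the following. The endpoint path, the payload builder and the field structure are my assumptions, not the actual drupal.js API:

```javascript
// Build a Drupal 7 node payload for a services JSON endpoint.
// 'und' is the default language key for D7 fields.
function buildNodePayload(type, title, bodyText) {
  return {
    type: type,
    title: title,
    body: { und: [{ value: bodyText }] }
  };
}

// Post the node; jQuery is assumed to be loaded, as in the examples above.
function drupalServicesPostNode(endpoint, payload, success) {
  jQuery.ajax({
    url: endpoint + '/node.json',
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(payload),
    dataType: 'json',
    success: success || function () { alert('Node saved!'); }
  });
}
```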

So let's go into a more detailed tutorial. To post pictures, you need a base64-encoded image. This is actually what you get back from Phonegap when accessing the camera, but just to see the basics of posting nodes with images, try typing this into a click handler for something (remember to allow cross-domain JavaScript and include drupal.js. Also, if you just include this code (and do not log in), remember to allow posting as anonymous. Oh, and obviously, include jQuery):

eiriksm/98e52c2ec1fc50e2ea56

What this does is make a file called "fancy.gif" and execute the success function postwith, which posts a node with the file id received from posting the image. Or more specifically: this would make a node with that pretty burning bird I have on my site. The crazy long string there is the base64 version of it. So let's take a look at how to do the same with a custom picture. I'll do something like this:

eiriksm/c24de5ac84dd4edfd1a7

This shoots the file all the way to your Drupal site with the function drupalServicesPostFile (declared in drupal.js). It then executes a success function called postwith, which is the same as above. drupalServicesPostNode does the rest. And then we go to our site and look at our fancy new picture node, created right from our phone's camera.
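To recap the two-step flow in code, the sketch below shows roughly how the file and node payloads fit together. Function names and the field name are assumptions for illustration, not the actual drupal.js internals:

```javascript
// Step 1: payload for posting a base64-encoded file to the services
// file resource. The fid in the response identifies the saved file.
function buildFilePayload(filename, base64Data) {
  return { filename: filename, file: base64Data };
}

// Step 2: payload for a node that references the returned fid in a
// Drupal 7 image field ('field_image' is a hypothetical field name).
function buildImageNodePayload(type, title, fid) {
  return {
    type: type,
    title: title,
    field_image: { und: [{ fid: fid }] }
  };
}
```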

As I have briefly mentioned in an earlier post, you can easily post to your Drupal site from a Phonegap app. The reason is that the cross-domain browser restriction does not apply to the apps you are building, because an app is basically a local file (the file:// protocol). While this is really swell, you don't actually develop on your phone, so it would be practical to try all this out in your browser before you deploy to your phone.

If you try it in your browser, it will most likely give you "Origin null is not allowed by Access-Control-Allow-Origin.", or just fail silently if you are using jQuery (for example, your success function never gets called in your $.ajax). So how do you do cross-domain JavaScript on your computer?

There might be more methods, but by far the easiest I have found is to start Chrome with the parameter "--disable-web-security". On Mac, the parameter is "--args --disable-web-security". If you are using Windows, you can just make a shortcut to Chrome and edit the "target" of the shortcut to include --disable-web-security at the end, for example "C:\Users\myuser\AppData\Local\Google\Chrome\Application\chrome.exe --disable-web-security". Remember to name it differently so you don't accidentally surf the web with cross-domain protection disabled.

Now that you have this in place, go ahead and make awesome apps to post to your Drupal site. Or just wait around for my boilerplate template coming soon.

Cloning a production site to your local development environment is super easy. Often you have the code in git, or maybe even as a make file. Either way, you just grab the code and restore the site from a backup_migrate dump. But maybe you have a KING SIZE files directory that you don’t want (or don't have the time) to download. Enter Apache rewrite magic!

You can just redirect your local files directory to the external files directory, so that localhost.dev/sites/default/files points to example.com/sites/default/files. To do this, you need the Apache modules proxy and proxy_http. The rest is just a couple of lines in the virtual host config. Example:
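The original example did not survive in this version of the post, but a vhost fragment along these lines should do the trick (domain names and paths are placeholders; adjust to your setup):

```apache
<VirtualHost *:80>
  ServerName localhost.dev
  DocumentRoot /var/www/localhost.dev

  # Requires: a2enmod proxy proxy_http rewrite
  RewriteEngine on
  # If the requested file does not exist locally...
  RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
  # ...proxy it from the production site instead.
  RewriteRule ^/sites/default/files/(.*)$ http://example.com/sites/default/files/$1 [P]
</VirtualHost>
```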

Today I had a bunch of old nodes from a migrated D6 site that did not have comments enabled, although the content type had comments enabled. This could also happen if you have created a bunch of nodes and all of a sudden change your mind and want to enable comments on them anyway. One could always edit each one of them and turn on comments, but that just can't be the only way, I thought, so I did some research.

At first I didn't find any out-of-the-box way to enable comments for multiple nodes. Programmatically enabling comments by looping through the nodes in a simple script would always work, but my solution was even simpler: the Views bulk operations module. The easiest way would have been if VBO had a ready-made action for enabling comments, but as that wasn’t the case, it is really not a long snippet to execute on each row. Here are the steps I used to achieve what I wanted.

1. Downloaded and enabled views bulk operations
2. Created a view displaying all nodes of the content type I wanted to enable comments on
3. Added a VBO field
4. Enabled the Execute PHP action for this field, and also the “select all nodes” checkbox
5. Added a page display
6. Viewed the view page, selected all nodes and executed the PHP action
7. Entered the following snippet as action
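The snippet itself did not survive in this version of the post; for Drupal 7, something along these lines should work inside the Execute PHP action (a sketch, where $entity is the node object VBO hands you):

```php
<?php
// Open comments on the node and save it.
// COMMENT_NODE_OPEN is Drupal 7's constant for open comments (value 2).
$entity->comment = COMMENT_NODE_OPEN;
node_save($entity);
```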

So, today I decided to have a go at the services module, to make an app post nodes to my site. With the services module enabled you can do a number of things from non-Drupal sites; for example, you can post nodes from your Phonegap app with JavaScript. And that is exactly what I wanted to do today.

OK, here is the JavaScript code. The services part of it is pretty straightforward to set up, but finding out how to create nodes with JSON and JavaScript was not that easy. You can probably figure out how to log in based on this. The snippet posts to a content type with a geofield, sending the user's location to my server along with some other values.

Remember, in Phonegap this should be fired after Phonegap has loaded! Also, this entire thing is wrapped in the geolocation success function, just to keep the code short. You probably want to use a named function instead, as well as an error function.
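The original snippet is not shown in this version of the post. As a hedged sketch of the idea: build the node payload in the geolocation success callback and POST it with jQuery. The content type, field name and endpoint path are my assumptions:

```javascript
// Build a Drupal 7 node payload with the user's position in a geofield.
// 'field_position' and the 'position' content type are hypothetical.
function buildLocationNode(position) {
  return {
    type: 'position',
    title: 'My position',
    field_position: {
      und: [{
        lat: position.coords.latitude,
        lon: position.coords.longitude
      }]
    }
  };
}

// Fire this only after Phonegap has loaded, as noted above:
// navigator.geolocation.getCurrentPosition(function (position) {
//   jQuery.ajax({
//     url: 'http://example.com/api/node.json',
//     type: 'POST',
//     contentType: 'application/json',
//     data: JSON.stringify(buildLocationNode(position)),
//     dataType: 'json'
//   });
// });
```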

So you have a feed that you aggregate through Feeds, and all new items should get the same value in a given field. For example, if you have feeds from 4 different webshops, you may want to add, for each feed, a fixed webshop description that is not present in the feed itself. So how do you make Feeds give the same value to each item of a feed? If you use Feeds XPath Parser (as I often do), how do you fix a value for each item in the feed? Or more specifically, how do you insert a fixed value with XPath?

Given that you are already familiar with XPath and Feeds, the best answer I have found is the XPath function concat(). Concat combines two or more values into one, e.g. concat("animated ", "gifs", " or GTFO") gives the output "animated gifs or GTFO".

As you may already have figured out, this can be used both to fix a string value and as a prefix to your feed item value in Feeds. E.g. concat("my guid in this feed item is ", "$guid") will give something like "my guid in this feed item is 1234guid_uniqueID".

Note that concat needs at least two arguments. So if your fixed value is supposed to be "animated_gifs" with no spaces, use concat("animated", "_gifs") or similar.

Again, this worked for me (as of the August 2011 dev version of Feeds XPath Parser, not alpha 4). If you have a simpler solution, please share it in the comment section.

Let's make a theoretical example: say you want a fixed field with a file reference that always renders in an <img> tag. concat("blues04", ".gif") would give the output "blues04.gif", which can then be mapped to that field and rendered as the image.