Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to static HTML that can be served directly. As part of a recent competition at SCDUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker, and for setup with Netlify, but they don’t quite line up with each other. I followed the Docker instructions, then the Netlify set, but had to chart my own course to get the site from the first project linked to the repo in the second. And since I’m getting used to using Docksal, when I had to fall back and do a bit of it myself I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and set up an account and a site from the template. There is a button in those directions to trigger the Netlify setup (if it fails, check to see if they have updated their instructions).

Log in with GitHub or a similar service, and let it create a repo for your project.

Follow Netlify’s directions for setting up DNS so you can have the domain you want, and HTTPS (through Let’s Encrypt). It took a couple of hours for that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to GitHub (or whatever service you used) and clone the repository to your local machine. There is a generated README on that project, but the directions aren’t 100% correct if you aren’t cloning onto a machine with a working PHP environment. This is when I switched over to Docksal, and ran the following series of commands:

fin init
fin composer install
fin drush tome:install
fin drush uli

Then log into your local site using the domain from Docksal and the link from drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment. Tome saves the content as JSON files inside the repository, so getting it to Netlify is a matter of committing those files and pushing them to the repo; Netlify rebuilds the site on every push.

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let’s try to get Umami onto this site. Go back to the terminal and have Tome reset everything (it’ll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile…and wait, because this takes a while…

That really was all that was involved for a simple site. You can see my repository on GitHub if you want to see all of what was generated along the way.

The whole process is pretty straightforward, but there are a few things that it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow: Umami takes several minutes to deploy, since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in the template is clear enough to follow if you are familiar with Composer projects.

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify’s branching workflow for managing dev, test, and production, it protects your work that way as well.

My directions above load a standard Docksal container, which includes MySQL, because that’s quick and easy. But Tome falls back to using an SQLite database, since it can be more confident SQLite is there. Again, this is reliable but slow. If I were going to do this on a more complete project I’d want a smaller Docksal setup, or I’d switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or the complex configuration of large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, this could be a great fit: a totally temporary instance of the site that can be rapidly deployed to a device, have content updated, push out to production, and then be deleted locally until needed again.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for that, and Tome has a module that claims to connect to that service, but I wasn’t able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well they have built a number of limits into the platform. These include process time limits and memory limits that are large enough for the vast majority of projects, but that from time to time run you into trouble on large jobs.

For data loading and updates their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That’s fine if you can afford to freeze updates to your production site, set up a process to mirror changes into your temporary copy, or absorb some other project overhead that can be limiting and challenging. But sometimes that’s not an option, or the data load takes too long for that to be practical on a regular basis.

I recently needed to do a very large import of records into a Drupal database, and so started to play around with solutions that would allow me to ignore those time limits. We were looking at needing to do about 50 million data writes, and the running time was initially over a week.

Since Drupal’s batch system was created to solve this exact problem, it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. The file does not have to represent the final state: you can use this process to load data directly if each step is quick, or you can serialize each record into a table or a queue job to process later.

One quick note about the code samples, I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. It could be adapted to a more traditional batch job, but I like the clarity the wrapper provides for breaking this back down for discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. The code samples provide two classes to achieve this. The first is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service takes over, using a bit of basic PHP file handling to load the file into a database table. If you need to do more than load the data into the database directly (say, create complex entities or other tasks) you can set up a second phase to run through the values and do that heavier lifting.

The managed file form element automagically gives you a file entity, and the value in the form state is the ID of that entity. The file will be temporary and have no references once the process is complete, so depending on your site setup it will eventually be purged. All of which means we can pass the values straight through to our batch processor.

When the data file is small enough, a few thousand rows at most, you can load it all right away without the need for a batch job. But larger files run into both time and memory concerns, and the whole point of this is to avoid those. With this approach we can ignore those limits, and we’re only limited by Pantheon’s upload file size. If the file is too large, you can upload it via SFTP and read it directly from there; so while the form is an easy way to load the file, you have other options.

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use a managed file is that Drupal can reliably give us the file path on a Pantheon server without us needing to know anything about where they have things stashed. Since we’re about to use generic PHP functions for file processing, we need to know that path reliably.

Now we have a file, and since it’s a CSV we can load a few rows at a time, process them, and then start again.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So on the first pass we initialize the position to zero and load the first row as the header. On every pass after that we need to find the point where we left off; for this we use generic PHP file functions to seek to the saved location.
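The PHP from the actual module isn’t reproduced here, but the seek-and-resume logic can be sketched in Python (the function and parameter names below are my own invention, not taken from the code samples):

```python
import csv

def process_batch(path, position, batch_size=100):
    """Read up to batch_size rows of a CSV, resuming at a saved byte offset.

    Returns (header, rows, new_position). The header is only read on the
    first pass (position == 0), mirroring the tracking described above.
    """
    header = None
    rows = []
    with open(path, newline='') as handle:
        if position == 0:
            # First pass: the first row provides the column names.
            header = next(csv.reader([handle.readline()]))
        else:
            # Later passes: seek straight to where the last batch stopped.
            handle.seek(position)
        for _ in range(batch_size):
            line = handle.readline()
            if not line:
                break  # end of file; the job is finished
            rows.append(next(csv.reader([line])))
        # Store this offset in the batch context for the next pass.
        return header, rows, handle.tell()
```

Each pass would then insert its rows into a table (or enqueue them) and stash the returned offset, so the next pass picks up exactly where this one stopped.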

The example code just dumps all of this into a database table. This can be useful as a raw data loader if you need to add a large data set to an existing site that’s used for reference data or something similar. It can also be used as the base to create more complex objects. The example code includes comments about generating a queue worker that could then run over time on cron or as another batch job; the Queue UI module provides a simple interface to run those as a batch job.

I’ve run this process for several hours at a stretch. Pantheon does hit system errors if a batch job is left running for extreme stretches (I ran into problems on some runs after 6-8 hours of run time), so a prep into the database followed by processing from a queue, or something else that’s easier to restart, has been more reliable.

I recently had reason to switch over to using Docksal for a project, and on the whole I really like it as an easy solution for getting a project-specific Drupal dev environment up and running quickly. But like many dev tools, the docs I found didn’t quite cover what I wanted because they made a bunch of assumptions.

Most assumed either I was starting a generic project or that I was starting a Pantheon specific project – and that I already had Docksal experience. In my case I was looking for a quick emergency replacement environment for a long-running Pantheon project.

Fairly recently Docksal added support for a project init command that helps with setup for Acquia, Pantheon, and Platform.sh, but pull init isn’t really well documented and requires a few preconditions.

Since I had to run a dozen Google searches, and ask several friends for help, to make it work I figured I’d write it up.

Install Docksal

First, follow the basic Docksal installation instructions for your host operating system. Once that completes, if you are using Linux as the host OS, log out and log back in (the installer just added your user to a group, and you need that group membership to start up Docker).

Add Pantheon Machine Token

Next you need a Pantheon machine token so that terminus can run within the new container you’re about to create. If you don’t have one already, follow Pantheon’s instructions to create one and save it someplace safe (like your password manager).

Once you have a machine token you need to tell Docksal about it. There are instructions for that (though they aren’t in the instructions for setting up Docksal with pull init); basically you add the key to your docksal.env file:

SECRET_TERMINUS_TOKEN="HASH_VALUE_PROVIDED_BY_PANTHEON_HERE"

Also, if you are using Linux, note that the instructions linked above say the file goes in $HOME/docksal/docksal.env, but you really want $HOME/.docksal/docksal.env (note the dot in front of docksal, which hides the directory).

Setup SSH Key

With the machine token in place you are almost ready to run the setup command, just one more precondition. If you haven’t been using Docker or Docksal they don’t know about your SSH key yet, and pull init assumes it’s around. So you need to tell Docksal to load it by running:

fin ssh-key add

If the whole setup is new, you may also need to create your key and add it to Pantheon. Once you have done that, if you are using a default SSH key name and location it should be picked up automatically (I have not tried this yet on Windows, so mileage there may vary – if you know the answer please leave me a comment). It is also a good idea to make sure the key itself is working right by getting the git clone command from your Pantheon dashboard and trying a manual clone on the command line (delete the clone once it’s done; this is just to prove you can get through).

Run Pull Init

Run the pull init command and Docksal will set up the site, maybe ask you a couple of questions, and clone the repo. It will leave a couple of things out that you may need: database settings, and .htaccess.

Add .htaccess as needed

Pantheon uses nginx; Docksal’s formula uses Apache. If you don’t keep a .htaccess file in your project (and while there is no reason not to, some Pantheon setups don’t keep extra stuff around) you need to put it back. If you don’t have a copy handy, copy and paste the content from the Drupal project repo: https://git.drupalcode.org/project/drupal/blob/8.8.x/.htaccess

Finally, you need to tell Drupal where to find the Docksal copy of the database. For that you need a settings.local.php file. Your project likely has a default version of this, which may contain things you may or may not want, so adjust as needed. Docksal creates a default database (named “default”) and provides a user named, well, “user”, which has a password of “user”. The host’s name is “db”. So your settings.local.php file needs to include those database settings at the very least.
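As a sketch, based on the Docksal defaults just described (adjust the values if your stack differs), the database block in settings.local.php looks something like this:

```php
// Docksal's default database container: database "default",
// user "user", password "user", on the host named "db".
$databases['default']['default'] = [
  'database' => 'default',
  'username' => 'user',
  'password' => 'user',
  'host' => 'db',
  'driver' => 'mysql',
];
```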

With the database now fully linked up to Drupal, you can now ask Docksal to pull down a copy of the database and a copy of the site files:

fin pull db

fin pull files

In the future you can also pull down code changes:

fin pull code

Bonus points: do this on a server.

On occasion it’s useful to have all this set up on a remote server, not just a local machine. There are a few more steps to take to do that safely.

First you may want to enable basic HTTP auth, just to keep away the prying eyes of Googlebot and friends. There are directions for that step (you’ll want the Apache instructions). Next you need to make sure that Docksal is actually listening to the host’s requests and that they are forwarded into the containers. Lots of blog posts say to run DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin reset proxy, but it turns out that fin reset proxy has been removed; instead you want:

DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin system reset

Next you need to add the vhost to the docksal.env file we were working with earlier (use your real domain in place of the example):

VIRTUAL_HOST=myproject.example.com

Now you need to add either a DNS entry someplace, or update your machine’s /etc/hosts file to look in the right place (the public IP address of the host machine).

Anything I missed?

If you think I missed anything, feel free to let me know. Windows users in particular, feel free to let me know about changes related to doing things there; I’ll try to work those in if I don’t get to figuring that out on my own in the near future.


This week marks the 20th anniversary of the Hague Appeal for Peace and everything that happened (and didn’t) as part of that event and since, so I decided to post some of my pictures from that adventure.

In my post on being an activist back in March I mentioned attending the Hague Appeal and the peace walk that followed. I was part of a delegation from Philadelphia Yearly Meeting; a group mostly made up of college students and a few older high school students, along with a few adults who handled the logistics and kept us on track more or less.

I have ten boxes of slides, and a few years ago I scanned them as best I could, but frankly the scans aren’t great. The slides, which were more than ten years old at the time, had already started to fade and color shift as a result of their age. I did some color correction as I prepped them for this, but I also like the feel of some being somewhat faded and shifted with time. They are shared here full frame, and some are roughly cropped, but none carefully realigned. Since they are now pushing twenty I decided that I wanted to leave them all at or near full size and try to capture a bit of the way I saw the world then, and less of how I would edit it now. I like the rough visual feel they have as part of reflecting on partially faded memories.

My guide and canvas bag from the Hague Appeal for Peace. Wayne features in lots of these pictures, and still has a spot on my desk.

Kofi Annan gave one of the keynote speeches (closing, I think).

Aung San Suu Kyi was still under house arrest, back before she left the Rohingya to die, and gave a recorded speech that was smuggled out for us to see.

This was the first time I saw Desmond Tutu speak in person. He was one of the bigger draws for his sessions.

I also learned that Tutu has a great sense of humor, making fun of all the people taking his picture.

HAP happened during the US intervention into the war in the Balkans, and some attendees became concerned that Russia might join the conflict with nuclear weapons. So a protest sign was created, and a march planned, in a bar one night.

Lots of people signed the banner when it was set out near the entrance.

People signed from their own perspectives.

The banner was carried to the Peace Palace, home of the International Court of Justice.

At the Peace Palace, survivors of the Hiroshima bombing were invited to speak and share their stories.

I got to wander by the Peace Palace a couple of times and took a few pictures.

There was a group of monks who walked for peace on a daily basis. They eventually joined the walk to Brussels.

There was a group, from East Timor I think, who came on bikes.

There were lots of presentations by various groups, often with cultural song and dance included.

I don’t take a lot of self-portraits, but I like this one, taken near where our group stayed on the coast of the North Sea.

We stayed on the coast of the North Sea and had great sunsets. I managed to get this picture there one evening.

Some of the folks from the PYM group sitting around chatting.

The walk to the Hague started the day after the main conference ended. This banner was carried in front for most of the distance.

We walked in all kinds of settings, from highways to bridges to fields (we just missed tulip season, so didn’t see many in bloom).

This was actually taken the first evening, I’ve always liked this image of the bike by the lake.

As we progressed we’d stop and hold public presentations and random die-ins.

One of the survivors of the blast in Hiroshima came along with his daughter and shared his story of survival during several of the stops.

This young man from India knew more about international politics by 18 than I’m ever likely to really understand. He was often carrying this UN flag.

This little guy also made the walk with us.

Meals were provided by a group that ran a small mobile kitchen for such events across western Europe. They would drive ahead of us, set up a lunch of sandwiches and soup, and then meet us for dinner at the main camp site.

Wayne posed for a couple of pictures with our meals. The bread, and much of the rest, was donated or bought locally in towns along the way.

This little girl and her parents were along for the journey as well. They would literally clown around with makeup, costumes, stilts, and juggling.

I was never clear on the idea of the juggling and other acts, but it was a nice distraction – which may have been the point.

Group meetings were a regular feature of our walk.

Wayne enjoying some of the treats I picked up along the way.

We camped each night in a town park or similar arrangement. Tents were moved by truck and bus between each stop so we only had to carry day packs.

Several people from PYM started the walk. Some had to leave before the end, so we took this picture before the first of us had to depart.

Much of the walk was low key along small roads through various towns and cities.

Even Wayne needed a break at times. It was a lot of walking.

This is one of two guys I talked to a lot along the way and whose names I cannot remember. I think they were both IT techs or programmers.

This is the second of the two. I’m pretty sure some of the general advice I’ve followed in my career was based on things they told me, but the details are largely lost.

In Brussels we were protesting in front of NATO headquarters, and we took some time to practice basic passive resistance strategies for those planning to get arrested.

Practice also involved mock police trying to antagonize people.

A group of Indian farmers joined for the last few days; it had taken a while to get their visas cleared. I honestly have no idea what they thought they were joining, but they were good fun to have along.

The final approach to NATO headquarters involved a lot of excitement and nervousness.

These three kids were from a community in Colombia, and were the only minors at the final protest. My job became keeping them out of trouble and getting them back to our lodgings safely. The boy in the red shirt elected to ignore me and got arrested.

There was a heavy police presence, and concertina wire strung to keep clear boundaries. But early on things were relaxed. These guys were taking our pictures so I took theirs.

This guy decided to get naked for some reason. It didn’t really make sense to me at the time, or now, but he had a nice time (and was charged with public nudity).

This was also the first time I ran into the Raelians. Even a group of anti-nuclear marchers thought them a bit odd, but we welcomed their support (this was before they started to work on human cloning).

In Europe they don’t have the associations we do with water cannons, so two were deployed, and they were used to move people who started to cut and cross the wire.

This is just after the best picture I didn’t take. Bill, a Friend from Brooklyn Meeting, stood between the streams of the cannons, shaking his fist as they tried to knock him down. He withstood their pressure, but I missed the image.

They tried for a while to dislodge a group that grew the more they tried to move it. They positioned the second truck to create a crossfire, but then changed approaches.

After an initial round of tension, water cannon bursts, and a few arrests, we all settled in for lunch in the shade. We debated politics with these officers and gave them some of our watermelon. Everyone seemed to get along most of the time; the police and protesters all encouraged calm from one another.

When the mounted police started down one side, it didn’t take a great military mind to spot a flanking maneuver, so I started to move myself and my charges away from the action.

I nearly did get swept up by this line, but got through with the help of the event organizer, who used his connections to make sure those who didn’t want to get arrested could leave.

That trip was an important few weeks in my life, and I’ve been having a great time going back through the pictures. If you were with me on that trip and wonder if I have other pictures of you kicking around, I might, so send me a note and I’ll try to see what’s around and send some your way.

For a project I’ve been working on recently, we needed to create a module that provides secure redirects from a Drupal site to FormAssembly. Overall the module does a number of things, but handling dynamic parameter signing took the most time.

FormAssembly provides a variety of great features for creating flexible forms that integrate with Salesforce. One of the more popular features is its ability to pull data from Salesforce to prefill fields on a form. But the downside is that it is easy to create forms that leak information from Salesforce into those forms, and create privacy risks.

To address this, FormAssembly allows 3rd-party tools to securely sign URLs that contain parameters (often Salesforce IDs) that could otherwise be used to extract information through an iteration attack or other basic approaches. This secure signing process can be done statically, but for most interesting projects you want to sign the URLs dynamically. The dynamic signing process allows you to alter the parameters on the fly and to set an expiration date that limits the value of a stolen link. Our project required this approach.

But the dynamic signing process has a couple of sharp corners. First, it’s rarely done outside of Salesforce, so there aren’t a lot of code samples around, and none that I could find in PHP. Second, FormAssembly is very open and honest about the fact that they do not provide support for this feature. So I had to create my own process from the documentation they provide. The docs are good, but very Salesforce-centric, with all code samples in APEX.

The process involves preparing the data for signature, generating an HMAC-SHA256 of it with a form-specific pre-shared key (in binary mode), converting that to a string using base64, and finally URL-encoding the result.

Their convention for preparing the data is straightforward. You format all parameters as just their key and value strung together: key1Value1key2Value2

The interesting part is the actual HMAC-SHA256, which needs to be generated in binary mode. That is often the default mode in other languages, but not in PHP (in fact, most PHP devs I’ve talked to don’t realize the last parameter to hash_hmac() is useful; if you are doing this in another language, check out this collection of examples).

From there you encode the output in base64 (which results in a 44-character hash), then URL-encode the hash to make sure it’s URL safe, which leaves you a few characters longer.

Finally you add your hash to the query string, and you’re ready to go.
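My actual solution was PHP (see the Gist mentioned below), but the four steps can be sketched compactly in Python; the function name and parameter handling here are my own, so check FormAssembly’s documentation for the exact conventions:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_query(params, shared_key):
    """Sign query parameters for a FormAssembly-style prefill URL (sketch).

    params is an ordered mapping of query keys to values;
    shared_key is the form-specific pre-shared key.
    """
    # 1. Concatenate keys and values with no separators: key1Value1key2Value2
    data = ''.join(f'{key}{value}' for key, value in params.items())
    # 2. HMAC-SHA256 with raw binary output; the PHP equivalent is
    #    hash_hmac('sha256', $data, $key, true), where true selects binary mode.
    digest = hmac.new(shared_key.encode(), data.encode(), hashlib.sha256).digest()
    # 3. Base64-encode the 32-byte digest, which always yields 44 characters.
    encoded = base64.b64encode(digest).decode()
    # 4. URL-encode so characters like '+' and '/' survive the query string.
    return quote(encoded, safe='')
```

The returned value is what gets appended to the query string as the signature parameter.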

To help anyone else who needs to do this, I generalized this part of the solution and tossed it into a Gist.

When someone tries to insult you with what you often see as a compliment it is worth stopping to reflect. Am I an activist? If I’m not, should I be?

On Valentine’s Day this year my wife and I spent a few hours at DSS for a meeting related to some of the children we work with in the Guardian ad Litem program. In the course of a rather tense conversation, a caseworker tossed out “Well, I am not an activist,” with the clear intention of implying that I am, and that activists are a problem.

It is the first time I can recall being called an Activist as an insult, and I’ve been a bit hung up on the topic ever since.

At AFSC I had colleagues who would argue that if you haven’t been arrested for a cause you aren’t really an activist. We had critics who argued that because AFSC staff were paid they couldn’t be true activists. I didn’t then, nor do I now, fully agree with those arguments, but my point is that when someone calls me an “activist” those are the comparisons they draw in my mind.

My credentials as an activist on that scale are weak at best. The first time I spent a lot of time with activists was in 1999, during the Hague Appeal for Peace and a peace walk that followed. The group walked from the Peace Palace – home of the International Court of Justice – in The Hague, Netherlands to NATO headquarters in Brussels, Belgium. That picture of the water cannon firing on a crowd at the top of the page is mine, although I wasn’t willing to risk arrest that day (my sister was getting married the next week and my mother would have killed me if I’d missed it because I had been arrested in Europe – would a true activist be deterred by such things?). It was a great experience, but didn’t do a thing toward our goal of nuclear disarmament – I now live in a town supported by nuclear weapons maintenance (and soon pit production too).

After college I took a job at AFSC that consisted largely of back office functions of one type or another – and while that was career-defining and personally gratifying work, there is an important difference between building the tools activists need to communicate and being the activist. In 2008 I was part of planning a peace conference in Philadelphia as part of the Peace and Concerns standing committee, but it is important to note that I objected to the civil disobedience that was part of that event (it being a consensus-driven process, people feared I would block it entirely – but I stood aside so they could move forward).

Having spent much of my professional life supporting back office functions on nonprofits, and now interacting with DSS as a volunteer who has to be careful about what I share since I have to maintain the privacy of the kids we work with, I struggle to envision myself as an activist. I support activists sure, but I don’t see myself as one.

But when someone tries to insult you with what you often see as a compliment it is worth stopping to reflect. Am I an activist? If I’m not, should I be?

It occurred to me that this caseworker has a much lower standard for what it means to be an activist than I do – anyone who simply speaks against the status quo in favor of well-established laws and precedents is an activist in his book. To be fair, he’s not far off the suggestion Bayard Rustin, and the committee who helped him write Speak Truth to Power, were making. And as much as I am sure they would deny it, caseworkers are the most powerful people in the lives of children in foster care: they dictate where the children live, who they can talk to, if and when they see siblings, when they buy clothes, where they go to school, and what doctors they see, and without an active advocate they shape how the courts see the children. And right now in South Carolina their power is being tested and reined in because a group of Guardians ad Litem stood up to the rampant systemic abuses a few years ago. The ramifications of that class action are still being determined, and no one really knows what the lasting effect will be. But this caseworker has inspired me to make sure we honor the sacrifices they made (all were forced to stop fighting for the named children because they were “distractions”).

I’m not sure I am an activist, but I promised those kids I would stay with them until the judge ordered me to stop. No matter what taunting I get from the caseworkers, their bosses, and others within the power structure, I can speak truth to power as long as I must.

For the SC DUG meeting this month Will Jackson from Kanopi Studios gave a talk about using Docksal for local Drupal development. Will has the joy of working with some of the Docksal developers and has become an advocate for the simplicity and power Docksal provides.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. If you want to see a polished version checkout our group members’ talks at camps and cons. So if some of the content of these videos seems a bit rough please understand we are all learning all the time and we are open to constructive feedback.

A few weeks ago I took a few hours to scratch an itch and went out to do a little local photography. I spent much of my time wandering the trails through Aiken State Park, although there are a few included here from another stop I made, and from the lunar eclipse on January 21st.

The trees along the entrance to Aiken State Park.

Several parts of the trail were partially under water from recent rains and poor maintenance.

In other places the small bridges were an interesting diversion.

More bridges in the woods.

Grasses dancing in the stream

Aiken apparently has its own Dalek.

One of several small streams.

Fungus close up, mostly to play with the lens.

More playing with depth of field on the new lens.

This guy was trying to hide in the bark.

This flag has caught my eye several times driving by. Finally stopped to capture it.

The building itself is mostly abandoned.

While getting images of the lunar eclipse I usually get tempted by other night scenes.