Author

We are in the process of transforming the way we host our applications to a Docker-based workflow. One of the challenges we face is file storage. At the heart of our business are open source technologies and tools, so we have looked into using Minio (more or less the same as Amazon S3 for file storage) instead of the local filesystem (or Amazon S3).

We are going to use the Drupal module Flysystem S3, which works both with Amazon S3 and Minio (which is compatible with the Amazon S3 API).

Flysystem is a filesystem abstraction library for PHP which allows you to easily swap out a local filesystem for a remote one - or from one remote to another.

For a new site it is pretty straightforward; for a legacy site you need to migrate your files from one storage to another – something I am going to look into in the next blog post.

Minio container

First we need Minio up and running. For that I am using Docker; here is an example docker-compose.yml:
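Something along these lines (a sketch – the credentials and volume path are placeholders, and port 9000 matches the Minio URL used further down):

```yaml
version: '2'
services:
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ACCESS_KEY: replace-with-access-key
      MINIO_SECRET_KEY: replace-with-secret-key
    volumes:
      # Persist uploaded files outside the container.
      - ./minio-data:/data
```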

Settings

When you have installed the Flysystem S3 module (and its dependency, the Flysystem module), we need to add the settings for Minio to our settings.php file (there is no settings UI for this in Drupal. Yet.):
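A sketch of what that could look like – the exact key names may differ between module versions, so double-check against the Flysystem S3 README; the credentials are placeholders, and the bucket and domains match the example below:

```php
<?php
// In settings.php – register an "s3" scheme backed by Minio.
$settings['flysystem'] = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key' => 'replace-with-access-key',
      'secret' => 'replace-with-secret-key',
      'region' => 'us-east-1',
      'bucket' => 'my-site',
      // Internal endpoint Drupal uses to talk to Minio.
      'endpoint' => 'http://minio.mysite.com:9000',
      // Public base URL that files get on the site.
      'cname' => 'minio.mysite.com:8001',
      'cname_is_bucket' => FALSE,
    ],
    // Let Minio store aggregated JS and CSS.
    'serve_js' => TRUE,
    'serve_css' => TRUE,
  ],
];
```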

The endpoint is used for communicating with Minio; cname is the base URL that files get on the site. serve_js and serve_css make Minio store the aggregated CSS and JS.

Create a field

You now need to define which fields are going to use the S3 storage. For this, I create a new image reference field and use “Flysystem: s3” as the upload destination.

Surf over to Minio – in our example it is on http://minio.mysite.com:9000 – add the defined bucket, my-site, and make sure that Drupal can write to it (edit the policy in Minio and make sure it has read and write on the prefix – or on the wildcard prefix, *).

And you are done

And that is it – now we are using Minio for storing the images. Try to upload a file in the field you created, and you should see the file in Minio. On the site you should of course also see the image, but now with the URL set as cname in the settings – in our case, minio.mysite.com:8001.

We have put some time and effort into the Flysystem S3 module together with other contributors, and we hope you will test it out and report any feedback. Have fun!

One week ago, we received a warning that a critical security update for Drupal, affecting Drupal 7 and 8 (and even 6, which is no longer supported), was going to be released today. And we braced ourselves for updates.

A couple of years ago, it was hard work for us to update a site when a security update was released. Nowadays our hosting and our processes are much better and simpler, and thanks to a team effort by our Live team at Digitalist, we got our most vulnerable sites patched minutes after the security fix was released.

The Digitalist Live team patched around 1700 sites in total on our own hosting in less than 2 hours!

If you read the FAQ for the security issue, it is clear that updating really is critical: if the vulnerability is exploited, all non-public data is accessible, and all data can be modified or deleted. Simply put, your site could be immediately hacked and taken over by someone else.

It is good to remember that the vulnerability has not been exploited anywhere that we know of. But after disclosure of a vulnerability, "black hat" hackers will immediately try to exploit Drupal sites. That is why it is so important to act quickly and apply security updates once they become public.

Drupal is one of the most secure CMS systems available - and it stays that way due to its robust vulnerability-handling process.

So last week a big crowd of us (16) from Digitalist went to Drupalcon in Vienna to join the roughly 2000 other attendees. We went to sessions and BOFs about caching in Drupal 8, Symfony components in Drupal core, Docker, Config Split, decoupled Drupal, multi-sites in Drupal 8, and a lot of other things.

Driesnote


A standard thing at Drupalcon is Dries talking about the state of Drupal – the Driesnote. It covers all the good things we have in Drupal and the community, what is working, and what we need to work on to make Drupal and the community even better. One great thing to hear was the result of the survey that the Drupal Association sent out to companies working with Drupal: many Drupal companies around the world are doing very well – 48.5% of them have growing sales, and Drupal deal sizes are also growing – for 47% of them, deals are getting bigger.

Dries also talked about the Workspace project – letting you work on a part of the site with a bunch of content, and see how it looks, without publishing it – a big step forward for Drupal. There was also some talk about adding a JS framework to get a better UI experience in the admin parts of the site.

Contenta


Decoupling Drupal (also known as headless Drupal) has been done for years now, and one really nice initiative is Contenta CMS, which tries to build a best-practice setup for decoupling Drupal. In their session they presented some of the work they have done, with an easy setup to help frontend developers who have never worked with Drupal get started on which endpoints to use, etc. If you are starting a project with decoupled Drupal, I recommend you check out Contenta CMS, which has ready examples using JS frameworks like Angular, React, etc.

Elm


There were a bunch of sessions discussing different frameworks that could be used as a frontend for Drupal. One I find very interesting (being mostly a backender myself) is using Elm, presented at Drupalcon by Amitai Burstein, who for the last four years has built different solutions for decoupled Drupal.

Caching, a guru-guide


Wim Leers is one of the core contributors who knows the most about the caching layer in Drupal, and his session about caching in Drupal 8 was one of the most interesting in Vienna. There is a lot to think about in caching, and his session went through most of it.

Moving configuration around


The Drupal 8 configuration system is a hard nut to crack for some projects, and one of the solutions getting more and more attention is Config Split, used for handling configuration differences between environments (dev → stage → prod). Fabian Bircher gave a crowded presentation about his brainchild and was also part of a BOF initiated by Lund University in Sweden, talking about the problems and solutions of using Drupal 8 for multi-site setups.

Humanized Internet


External keynote speakers have been a long tradition, and this year we had Monique J. Morrow talking about the Humanized Internet, with a lot of focus on personal security and the Internet. One of the examples she brought up was the Swedish data breach inside Transportstyrelsen, along with the upcoming GDPR regulation that helps protect personal data inside the EU.

If you just wanted your content to be cached before Drupal 8, there were almost no problems: just turn on caching for anonymous users, and you are all set. Muhahhaha! Who am I kidding...

If you wanted to serve different content depending on the user, role, etc., you had problems. If you wanted to invalidate the cache, you had problems. If you wanted to show a view of nodes and turn off caching for that view, but had caching for anonymous users, you had problems. And so on. Most caching issues were solved by clearing all the cache, which could bring down the site if you were unlucky.

So the real problem we wanted to solve was not caching per se, it was cache invalidation. And as Phil Karlton supposedly said in the early nineties: “There are only two hard things in Computer Science: cache invalidation and naming things.”

Some sites solved the caching issue before Drupal 8 by simply turning off the cache completely and scaling up the environments instead, spending a lot of money in doing so. I have seen some high-traffic sites with almost no cache logic in place, because it was too hard to get the cache invalidation to work. Those who worked hard on getting the cache and cache invalidation to work smarter used modules like Expire and Purge, and integrated them with Rules to solve complex cache invalidation. But it was almost impossible for any Drupal module to know where any content was used on a site. And that is what we want from smart cache invalidation.

Almost the only case where the default caching worked with no issues before D8 was if your site was just one node and some static blocks. If you updated that node, the cache of that node would (hopefully) be invalidated. A normal-sized site has hundreds or thousands of pieces of content, relations to other content, listings of nodes, etc. So it was really hard before D8.

So what we needed for caching in Drupal 8 was for Drupal to be aware of what is cached, where it is used, and in which context. Cache all the things (aka "fast by default"), and make cache invalidation easier (aka cache tags). And we got it. Let's dive into what we got in the next blog post.
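As a small taste of what is coming, this is roughly what cacheability metadata looks like in a Drupal 8 render array (a sketch; node 42 is just an example):

```php
<?php
// A render array declares what it depends on.
$build = [
  '#markup' => $node->getTitle(),
  '#cache' => [
    // Invalidated whenever this node is saved (e.g. ['node:42']).
    'tags' => $node->getCacheTags(),
    // Vary the cached copy per role, i.e. the "context".
    'contexts' => ['user.roles'],
  ],
];

// On save, core invalidates the tag everywhere it was used:
\Drupal\Core\Cache\Cache::invalidateTags(['node:42']);
```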

This is the second part of our ongoing series Caching in Drupal 8; you can find the first part here (with links to the blog posts published so far).

The plan is to finish this series in a month's time, with a couple of blog posts or more per week. Many parts of the series are loosely based on a session I did for DrupalCamp Northern Lights this February. But here I will have the time to go into more detail, and with more code examples.

We started working with Drupal 8 early; our first project was this site, and after that we started to deliver Drupal 8 sites to our clients. We learned a lot during all the projects, and hopefully I will be able to share with you everything we learned about the caching layer in an understandable way. So this knowledge is based on trial and error on real projects.

The target audience for this blog series is both backend and frontend developers.


As of Purge module 8.x-3.0-beta5, creation of cache tags has been removed from the Purge module itself and should now be handled by the purgers instead – so from Varnish Purge 8.x-1.4 we now have a sub-module, Varnish Purger Tags, that handles the cache tags.

Next Wednesday you are all welcome to our Drupal meetup at Wunderkraut, where we will talk about caching in Drupal 8 and drink some beer.

It has been a while, but now it is time again for a Drupal meetup in Stockholm, and it will be at the Wunderkraut office in Stockholm – sign up for it here. There will be beer, mingling, a talk about caching in Drupal 8, and more. Bring a friend or two!

Out of the box, Drupal 8 sets the X-Frame-Options: SAMEORIGIN header on page responses, which means that many modern web browsers do not allow the site to be framed from another domain, mostly for security reasons. This is good in many cases, but some web browsers have problems with it, and X-Frame-Options is deprecated in favor of Content-Security-Policy.

So why do you need a header like that? It is mainly for protecting a site against what is called clickjacking.

Also, in some cases you want your site to be framed into another, and doing that out of the box with Drupal 8 is not possible in most modern web browsers unless you alter the site's headers in Apache, Nginx, Varnish, or in some other way. We are now going to look into doing it in “some other way” – in this case, with Drupal. I prefer using Drupal to control the site's headers, because the headers are part of the application.
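A minimal sketch of how that can be done with a response event subscriber – the module name mymodule and the trusted domain are placeholders, and the class must also be registered in mymodule.services.yml with the event_subscriber tag:

```php
<?php
// mymodule/src/EventSubscriber/FrameHeaderSubscriber.php
namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class FrameHeaderSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Low priority so we run after core has set its default header.
    return [KernelEvents::RESPONSE => ['onResponse', -100]];
  }

  public function onResponse(FilterResponseEvent $event) {
    $response = $event->getResponse();
    // Drop the deprecated header and allow framing from a trusted site.
    $response->headers->remove('X-Frame-Options');
    $response->headers->set(
      'Content-Security-Policy',
      "frame-ancestors 'self' https://trusted.example.com"
    );
  }

}
```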

In our developer workflow we install the site continuously during development, as a health check of the code base. One of the problems we have in this workflow is when we declare a service that belongs to a module that is not activated yet during the install, like the Memcache module.
This is what a site normally needs in settings.php:
$settings['memcache']['servers'] = ['localhost:11211' => 'default'];
$settings['memcache']['bins'] = ['default' => 'default'];
$settings['memcache']['key_prefix'] = 'foo_bar';
$settings['cache']['default'] = 'cache.backend.memcache';
This works perfectly to add on a running site where the module is activated…
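One common way to guard against this – a sketch, not necessarily the exact approach from the full post, and the class name checked here is an assumption about the Memcache module's code – is to only switch the cache backend when the module's code is present:

```php
<?php
// Only point the default cache bin at memcache when the module's
// code actually exists, so a fresh install does not break.
// (Hypothetical class name – adjust to your Memcache module version.)
if (class_exists('Drupal\memcache\DrupalMemcacheFactory')) {
  $settings['cache']['default'] = 'cache.backend.memcache';
}
```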

As of Purge module 8.x-3.0-beta5, creation of cache tags has been removed from the Purge module itself and should now be handled by the purgers instead – so from Varnish Purge 8.x-1.4 we now have a sub-module, Varnish Purger Tags, that handles the cache tags.

To use Varnish Purge with cache tags from version 8.x-1.4, you need to activate the new sub-module. You also need to reconfigure the purger, because we renamed the header for the cache tags to Cache-Tags – we see no point in calling it something else – so you also need to update your VCL file if you are using the older setting, Purge-Cache-Tags, for…

In this blog post I am going to go through a step-by-step setup for using the Varnish Purge module together with Purge and Drupal 8.
Please read this about updates done since the blog post was written.
We have started to work on the Varnish Purge module and are using it in some of our projects. The Varnish Purge module is a fork of the Generic HTTP Purger with some minor changes.
To use Purge and Varnish Purge you need a working setup with Varnish already; for an example VCL, see the end of the blog post.
First, install the Purge and Varnish Purge modules. For Purge to work normally you also have to install…

The idea with dropcat is that you use it with options and/or configuration files. I recommend using it with config files, with minor settings as options.

You can just use a default settings file, which should be named dropcat.yml, or – as in most cases – have one config file for each environment: dev, stage, prod, etc.

You can use an environment variable, DROPCAT_ENV, to set which environment to use. To use the prod environment, set that variable in the terminal with: export DROPCAT_ENV=prod

Normally we set this environment variable in our Jenkins build, but you can also pass it as a parameter to dropcat, like: dropcat backup --env=prod

That will use the dropcat.prod.yml file.

By default, dropcat uses dropcat.yml if you don't set an environment.

There will be more in the next blog posts, but first let's look at a minimal config file. In our root dir we could have a dropcat.yml file with this config:
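A minimal dropcat.yml, reconstructed from the settings described below – all values are placeholders, and the exact nesting may differ from the real tool:

```yaml
appname: myapp
local:
  environment:
    tmp_path: /tmp
    seperator: _
    drush_folder: /var/lib/jenkins/.drush
remote:
  environment:
    server: deploy.example.com
    ssh_user: deploy
    ssh_port: 22
    identity_file: /var/lib/jenkins/.ssh/id_rsa
    web_root: /var/www
    temp_folder: /tmp
    alias: myapp
site:
  environment:
    drush_alias: myapp
    backup_path: /backups
    original_path: /var/www/shared/files
    symlink: web/sites/default/files
    url: https://myapp.example.com
    name: myapp
mysql:
  environment:
    host: db.example.com
    database: myapp
    user: myapp
    password: secret
    port: 3306
```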

The settings are grouped in a way that should explain what they are used for: local.environment is where we deploy from, remote.environment is where we deploy to, site.environment is for drush and symlinks (we use them for the files folder), and mysql.environment is for… yeah, you guessed correctly – MySQL/MariaDB.

appname

This is the application name, used for creating a tar-file with that name (with some more information, like build date and build number).

local

These are the settings for where we deploy from; it could be local, or a build server such as Jenkins.

tmp_path

Where we temporarily store stuff.

seperator

Used as a separator in the name of the folder to deploy, like myapp_DATE.

drush_folder

Where the drush settings you deploy from are stored, normally in your home folder (for Jenkins, normally /var/lib/jenkins/.drush). This is also the path where the drush alias is saved by dropcat prepare.

remote

server

The server you deploy your code to.

ssh_user

User for SSH access to your remote server.

ssh_port

Port to use for SSH to your remote server.

identity_file

Which private SSH key to use to log in to your remote server.

web_root

Path to which your site is going to be deployed.

temp_folder

Temp folder on the remote server, used for unpacking the tar file.

alias

Symlink alias for your site.

site

drush_alias

Name of your drush alias, used from the 'local' server. The drush alias is created as part of dropcat prepare.

backup_path

Backup path on the 'local' server. Used by dropcat backup.

original_path

Existing path to point a symlink to – we use it for the files folder.

symlink

Symlink path that points to original_path.

url

URL for your site, used in the drush alias.

name

Name of the site in the drush alias.

mysql

host

Name of the database host.

database

Database to use.

user

Database user.

password

Password for the database user.

port

Port to use for MySQL.

We are still at a very abstract level; next time we will go through what is needed in a normal Jenkins build.


In a series of blog posts I am going to present our new tool for doing Drupal deploys. It was developed internally by the ops team at Wunderkraut Sweden. We built it because, when we started doing Drupal 8 deploys, we tried to rethink how we had mostly done Drupal deploys before – we had some issues with what we already had.

What we had - Jenkins and Aegir

For some years we have been using a combination of Jenkins and Aegir to deploy our sites. That workflow worked, sort of, well for us. But because it was not a perfect match, we tried to rethink how we should do deploys with Drupal 8 in mind.

Research phase

We looked in many directions – Capistrano and Appistrano, OpenDevShop, platform.sh, Aegir 3, etc. – but none of them fit our current needs. We wanted to simplify things, and most of the tools just added another layer that was not a perfect fit for us. Also, it was important to us that the solution be open source.

We went old school and built our own solution – almost.

Re-use and invent

With Drupal 8 we got to know Symfony better, and Symfony has a console component that is also used by the Drupal Console project. The advantages of using Symfony Console as a base for our deploy flow were big: it follows Symfony best practices and builds on open source projects. Also, drush does a lot of the stuff we need in the deploy process, so that is an important part too. We did not want to re-invent things that already worked well.

Enter Dropcat

So we started to build Dropcat (Drop as in Drupal, and cat because… because of cats), and we slowly added more and more to it. We now have most of the commands needed for a normal deploy. We are still working on one important bit – the rollback – and hopefully, by the time this series of blog posts about Dropcat is finished, we will have that in place as well.


OK, so now we have a WYSIWYG editor in Drupal 8 core – but what if you want another editor, like the one used on medium.com?

I have done some initial work to get the Medium clone inside Drupal 8, and have now set up a sandbox on d.o and committed it to the Medium module. Please test it out if you are interested. The further plan for the module is to get a working media solution with it – and if you are skilled in JS (I am not :-)) and feel you want to contribute...


The editor used to edit posts at medium.com is really slick, and I find it interesting and intuitive. Davi Ferreira has made an open source clone of it, so it can easily be used in other places.

@cweagans has done great work getting the Medium editor into its own module, but I would rather have it inside the WYSIWYG API. So I took some parts of his work and made a patch – if somebody else finds it interesting to get this editor working with the WYSIWYG API, please try it out, test, review, throw stuff at it...

As a first step I just added the text editing part, with further plans to get it working with Asset for images, videos, etc.


I looked for a simple debugger to use in the frontend, and discovered PHP-ref.

I like to keep things simple and clean. I like devel for simple debugging in the frontend (for more advanced debugging I use Xdebug), but devel adds a lot of overhead – like many Drupal modules it tries to cover a lot of issues, and therefore it adds a lot of things you may not need. So I tested some of the debuggers you can use with PHP in the frontend, and I really like PHP-ref – simple and clean, just the way I like it.

So now I have wrapped it in a module – Ref debug. With it you can debug entities and debug directly in code, as a replacement for devel's dpm() – instead you use r(), and you can pass it functions, variables, classes, etc.
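For example (a sketch – r() is the helper PHP-ref provides, and the variables are made up):

```php
<?php
// Instead of devel's dpm($node), dump anything with r():
r($node);               // an entity
r(get_defined_vars());  // everything in the current scope
r('strlen');            // inspect a function
```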

When you work locally during development, or test on stage/dev or wherever, you sometimes need the files from production. Our old way of solving that was downloading the whole file catalogue and having it locally. Sometimes the file catalogues were several gigabytes large, so that is not a good workflow at all.

To solve that problem we are now using Stage File Proxy. We have been using it for some time now to get files to stage or locally, and it works really well (we have some issues on D6 sites, but it works almost flawlessly on D7 sites). Stage File Proxy downloads the files that are requested from the production (or whatever) site to the running environment.

So you just get the files you need, and you can easily delete the files locally and get them back when needed. A time and space saver.

A nice patch for the module adds an admin interface, so that you don't need to add settings to settings.php or create variables. EDIT: Greggles just committed that patch to dev :-)

So I did some work on the Drupal module Semantic Fields to make it exportable. I made some patches and uploaded them, and everything seemed OK – not much activity in the issue queue, so nobody else tested my stuff. So I deleted my folder with Semantic Fields so I could test the patches myself. Then I realised I had forgotten to add important files to my patches – the new files with the exportability.

Thanks for this latest screencast. I am using Field Collection on nearly all of my projects. Where I get stuck is theming the form of nested field collections. The default layout of the field collection fields on a form contains similar CSS classes, and it is hard to handle when the field collection has unlimited values and is nested.

For example: a candidate creates a CV on the website using the Profile 2 registration form.

My requirement is to theme all elements on one line, and they should keep their position when a new instance is added. I tried using CSS, but I can only theme one instance of the field collection – when a new instance is added, they fall back to the default position. Not a good solution.

Please let me know if you have worked with renderable elements with Display Suite on forms, and how that can help in this situation. I have seen a screencast by the author of DS, but it exemplifies only a CCK node edit form, not Profile 2 or field collections.

A quick way to debug which panels are slow is to install the contrib module Panels, Why So Slow?

A slightly misleading name, though – you don't get information about why the panels are slow, just how fast each panel is rendered, but it is a good start for debugging. It seems the developer, drweish, is not planning any further development of the module, since it is marked as unsupported. But hey, it seems to work.

Just for the record, a note: my experience with Panels is not that they are slower than any other way of building sites with Drupal.