Author: msg_staff

The Discourse forum software has built-in tools to move system backups to S3 automatically for off-site storage. If you don’t want to use S3, you have to handle this yourself.

Following is a script that you can run via cron to upload the most recent backup from Discourse to Google Cloud Storage and remove the one before it (if that’s what you want to do). You’ll have to adjust it for your own environment, although the backups should be in the same place on all standard Discourse installations.

#!/bin/bash
# This script will copy the latest discourse backup to google cloud
# Then it will remove the previous backup from google cloud
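The body of the script did not survive in this copy. Below is a minimal sketch of what such a cron job can look like, assuming the standard Docker install location for Discourse backups and an already-authenticated gsutil; the bucket name and paths are placeholders you must change.

```shell
#!/bin/bash
# Sketch only: copy the newest Discourse backup to Google Cloud Storage and
# remove the previous one from the bucket. Paths and bucket are assumptions.
BACKUP_DIR=${BACKUP_DIR:-/var/discourse/shared/standalone/backups/default}
BUCKET=${BUCKET:-gs://your-bucket-name}

# Print the newest and second-newest backup archives in $1, newest first.
latest_two() {
  ls -1t "$1"/*.tar.gz 2>/dev/null | head -n 2
}

LATEST=$(latest_two "$BACKUP_DIR" | sed -n '1p')
PREVIOUS=$(latest_two "$BACKUP_DIR" | sed -n '2p')

if command -v gsutil >/dev/null && [ -n "$LATEST" ]; then
  # Upload the newest backup, then delete the older one from the bucket.
  gsutil cp "$LATEST" "$BUCKET/"
  [ -n "$PREVIOUS" ] && gsutil rm "$BUCKET/$(basename "$PREVIOUS")"
fi
```

A crontab entry such as `0 3 * * * /root/discourse-gcs-backup.sh` would run it nightly.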

ServerPilot is an automated server provisioning solution, similar to Forge from the creators of Laravel. These are great solutions for those of us who want to focus on building applications and programming rather than on server maintenance. While server setup is not difficult, particularly for one or a few sites, if you have many sites on your server and many development environments with multiple SSL certificates, etc., these tools can save a lot of time and headaches.

While this is written for ServerPilot, the majority of it is applicable regardless of what solution you use to manage your server, even if you set everything up yourself. ServerPilot will also run on Vultr, DigitalOcean, or other VPS providers, so some tailoring may be needed there as well. Additionally, while ServerPilot is often used with WordPress sites, this script will back up all databases and all files, so it will work for Drupal, ProcessWire, or anything else you have on your server.

While ServerPilot handles many aspects of managing your web sites, it doesn’t provide any backup services. And while Vultr and DigitalOcean, for example, offer server-wide backups, these are impractical for many uses, particularly on servers hosting multiple sites for different clients.

On to the steps:

SSH into your server:

ssh username@servername.com

su to root on your server and cd to /tmp:

su - root
cd /tmp

Now download and set up the command line tools for Amazon Web Services (AWS):

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip

** note that on ServerPilot-administered servers unzip is not available by default, so you will need to apt-get install unzip

./awscli-bundle/install -b ~/bin/aws

Now you need to configure it with your keys:

aws configure

** if you don’t know how to get your keys, read this.

Now you should be able to run a command like aws s3 ls (a directory listing of your S3 buckets); if you cannot, you need to give the user associated with the keys you entered in the configuration the appropriate S3 permissions.

Now we are going to set up the database backup script. In ServerPilot, by default, root has a .my.cnf file which contains the root MySQL password. This means that commands like mysqldump do not require a password when run by this user. You can create this file manually, or better, use mysql_config_editor (MySQL 5.6+) to create .mylogin.cnf, which stores the password in obfuscated form rather than plain text.
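For reference, a minimal ~/.my.cnf has this shape (the password value is a placeholder; keep the file readable only by root, chmod 600):

```
[client]
user=root
password=your-mysql-root-password
```

The mysql_config_editor equivalent is mysql_config_editor set --login-path=client --user=root --password, which prompts for the password and writes it, obfuscated, to ~/.mylogin.cnf.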

Enter the following into this file (modify as needed). For the most part this would simply be changing the “ExcludeDatabases” line to contain any other databases you don’t want to back up. If you only have one database you can drop all of that and just use the mysqldump command for that one database. This script assumes you have multiple sites set up; otherwise you probably wouldn’t be using ServerPilot anyhow…

#!/bin/bash
ExcludeDatabases="Database|information_schema|performance_schema|mysql"
databases=`mysql -u mysqladminuser -e "SHOW DATABASES;" | tr -d "| " | egrep -v "$ExcludeDatabases"`
for db in $databases; do
    echo "Dumping database: $db"
    mysqldump --add-drop-table -u mysqladminuser $db | bzip2 -c > /whereyouwantthem/backups/`date +%Y%m%d`.$db.sql.bz2
done

Now we will open the script, clean it up a bit, add our tar backups, and then copy everything off to S3. So nano sitesbackup.sh; the S3 copy portion of the script is below:

# Copy everything in the directory to S3 for offsite backup
# Occasionally you should delete some of these on AWS to save space
/usr/local/bin/aws s3 cp $BackUpDir/ s3://bucketname/$TodaysDate --recursive --include "*.bz2" --region your-region-1
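The tar step mentioned above is not present in this archived copy. Here is a hedged sketch of what it might look like; the variable values, and the assumption that ServerPilot apps live under /srv/users/serverpilot/apps, should be adjusted for your server.

```shell
#!/bin/bash
# Sketch: archive each ServerPilot app directory into the backup directory.
# BackUpDir, AppsDir and TodaysDate are assumptions; adjust for your setup.
BackUpDir=${BackUpDir:-/whereyouwantthem/backups}
AppsDir=${AppsDir:-/srv/users/serverpilot/apps}
TodaysDate=$(date +%Y%m%d)

# tar+bzip2 a single app directory into $BackUpDir
backup_app() {
  tar -cjf "$BackUpDir/$TodaysDate.$(basename "$1").tar.bz2" \
      -C "$(dirname "$1")" "$(basename "$1")"
}

for app in "$AppsDir"/*/; do
  [ -d "$app" ] || continue   # skip if the glob did not match anything
  backup_app "${app%/}"
done
```

These archives then land in the same $BackUpDir that the aws s3 cp command above sweeps up.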

That’s about it. Obviously you should tailor this for your own needs and also clean up your AWS buckets from time to time so that you are not paying for storage you don’t need. I hope this is helpful for others.

CKEditor is the default editor in the ProcessWire CMS. Generally speaking it is a well-functioning editor, but it does have some advanced features which can lead to frustration.

One of these is the advanced content filter. Settings for this let you allow certain attributes which would otherwise be removed when your document is saved.

One very commonly used example is the id attribute, e.g.

<a id="linktoheading">

By default, CKEditor as it is set up in ProcessWire will strip this out. To change this, edit the field for which you would like this attribute kept, click on the “Input” tab, then scroll down to “Extra Allowed Content” and enter the following:

a *[id]

This allows the id attribute on the a element (and, via the “*”, on any other element).

Another commonly stripped item is classes on div tags. In the same box as above you could enter, for example, div(*) to allow any class on a div (in this syntax, parentheses list the permitted classes).

During a recent migration from a hosted ecommerce solution to WooCommerce we had to move a large number of product images. With a combination of the WooCommerce CSV export and CSV import modules, moving large numbers of products is quite easy. That covers the data.

For the graphics, you cannot just upload images into the WordPress media library and have them immediately recognized, so you need to do a few things first.

These notes are for a project converting a relatively complex Drupal 6 site to ProcessWire. This particular site had a number of relatively complex interrelationships in the data that had to be accounted for. It had also come through multiple versions of Drupal over the years and had some cleaning up to do.

Because of the amount of data that needed to be cleaned, wide use of CSV files was more practical than a straight database-to-database transfer. More will be added to this later.

SQL Snippets

Creating current Drupal taxonomy links on pages to their ProcessWire counterparts (some other work comes before this: views exports, CSV imports, and then creating a temporary file holding the relationships). The spreadsheet formulas =SPLIT(A1;",") and =TRANSPOSE(A1:Z1) are helpful for converting the views CSV export to rows.
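The SQL itself did not survive in this copy. As an illustration only, an update of this shape (every table and column name here is hypothetical, not from the original notes) can map old taxonomy ids to the new pages once the relationship table exists:

```sql
-- Hypothetical names: taxonomy_map(drupal_tid, pw_page_id) is the temporary
-- relationship table; pw_field_taxonomy holds the links being rewritten.
UPDATE pw_field_taxonomy f
JOIN taxonomy_map m ON f.data = m.drupal_tid
SET f.data = m.pw_page_id;
```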

Whether you are moving between different CMS systems or just doing bulk maintenance, it is often useful to import data with CSV files. Often this is because it is easier to manipulate the data in a spreadsheet and then create mass updates.

phpMyAdmin doesn’t allow you to update data from a CSV file directly. You can import the data into an empty table, but you cannot update existing records. To get around this, you can follow these steps:

Export the data that you need from the current table, in part so you can see how it is structured.

Then edit this file in a spreadsheet program. After your changes are finished, export the file as CSV. In this example we are updating references to page URLs in a table “pages”, and our data file simply has an id and a url instead of all the columns.

In phpMyAdmin, create a new temporary table:

CREATE TABLE pages_url_update LIKE pages

(which creates a replica; or just create the columns you need, in this example an id and a url)

Open that table in phpMyAdmin and then choose “Import”, select the CSV to upload, and choose “CSV using LOAD DATA” under Format.
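The archived copy ends here; the remaining step is the actual update. A sketch, assuming the id and url columns described above (run it from phpMyAdmin’s SQL tab, then drop the temporary table):

```sql
-- Copy the corrected urls from the temporary table back into pages,
-- matching rows on id, then remove the temporary table.
UPDATE pages
JOIN pages_url_update USING (id)
SET pages.url = pages_url_update.url;

DROP TABLE pages_url_update;
```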