Archive

Akamai's new Sandbox can be run in local development environments, so you can test changes in development with production-like CDN settings. This lets you identify issues more quickly before rolling out to production.

The Akamai Sandbox (also known as DevPoPs) is a Java app (see https://bit.ly/aka-sb-gh) that can be containerised for portability and ease of setup and use.

In the originMappings section (a full example follows this list):
- from: the origin hostname in your Akamai property, e.g. origin.example.com
- to: the local/development origin to map that hostname to, made up of the fields below
- secure: true - enables HTTPS on the new origin
- port: 8443 - as the Sandbox is now listening on port 443, the origin needs to be on a different port
- host: host.docker.internal - a special Docker hostname on macOS that resolves to the host's IP address (this assumes your dev server is also hosted on your Mac)
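Putting those fields together, the relevant part of the sandbox config looks something like this (the values are the examples from above; the exact nesting may vary between sandbox-client versions):

{
  "originMappings": [
    {
      "from": "origin.example.com",
      "to": {
        "secure": true,
        "port": 8443,
        "host": "host.docker.internal"
      }
    }
  ]
}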

This setup can also be incorporated into an existing Docker Compose setup.
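A minimal sketch, assuming you have built your own sandbox-client image (the image name, ports and paths below are placeholders, not part of the official tooling):

services:
  akamai-sandbox:
    # Placeholder image name: a containerised build of the sandbox-client jar.
    image: akamai-sandbox:latest
    ports:
      # The Sandbox listens on 443, per the origin mapping above.
      - "443:443"
    volumes:
      # Mount your sandbox config into the container (path is a placeholder).
      - ./config.json:/opt/sandbox/config.json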

PHP CodeSniffer was always one of the slowest tasks in our Jenkins CI as it ran across our whole code base. LB Denker from Etsy wrote a good piece of software called CSRunner, which solves this problem by running phpcs only on files that have changed in the last seven days or so. It is written as a PHP script run from Jenkins.

I took this idea and adapted it to run in Ant. Instead of looking at files changed in the last x days, it looks at the checkstyle report from the last run and gets a list of files with problems, then merges this with any files that have changed since the last build. In theory it should bring the run time down (assuming you have a low number of files with problems).
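To make the approach concrete, here is the core logic as a standalone PHP sketch rather than the actual Ant target (the report path and the git invocation are assumptions):

<?php
// Collect files that had problems in the last checkstyle report.
$report = simplexml_load_file('build/logs/checkstyle.xml');
$files = [];
foreach ($report->file as $file) {
    if (count($file->error) > 0) {
        $files[] = (string) $file['name'];
    }
}

// Merge in files changed since the last build (here: since the previous commit).
$changed = array_filter(explode("\n", (string) shell_exec('git diff --name-only HEAD~1')));
$targets = array_unique(array_merge($files, $changed));

// Run phpcs against the merged list only.
passthru('phpcs ' . implode(' ', array_map('escapeshellarg', $targets)));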

I’m open to any ideas on how to improve this as I’m not that experienced with Ant.

I have been trying to migrate everything in MySQL to InnoDB (death to all MyISAM), but was unsure how much data was being stored in each storage engine.
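A query against information_schema along these lines gives a total for each engine (views have a NULL engine, hence the filter):

SELECT engine,
       COUNT(*) AS tables,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS total_mb
FROM information_schema.tables
WHERE engine IS NOT NULL
GROUP BY engine;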

I've been playing around with validator.nu for the last few days. I have been trying to get a standalone version working so I could package it up and puppetize it. Unfortunately, a lot of the standalone jar builders failed (Java hell).

I just did a quick survey of the top 500 sites in NZ (based on Alexa data) and was disappointed to see that only two NZ-based sites (excluding Google, Microsoft, Facebook etc.) supported IPv6: geekzone.co.nz and nzsale.co.nz (Geekzone implemented its IPv6 via Cloudflare and NZ Sale through Akamai).
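If you want to repeat the exercise, the check is just a lookup for AAAA records on each hostname. A rough sketch (the input file of hostnames is hypothetical):

while read -r site; do
    # An AAAA answer (IPv6 addresses contain colons) means the site has IPv6.
    if dig +short AAAA "$site" | grep -q ':'; then
        echo "$site supports IPv6"
    fi
done < top500-nz.txt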

Come on people, it's 2014. There's no excuse not to support IPv6, especially with two RIRs on their last /8 and APNIC with ~13.5 million addresses remaining. What's really worrying is that some of the major ISPs (Telecom, Vodafone, Orcon) don't even have IPv6 on their public-facing websites. I'd guess that their residential customers won't be seeing IPv6 on their connections anytime soon and that CGN (carrier-grade NAT) is a real possibility.

It seems that vBulletin doesn’t test on PHP 5.4 or 5.5 these days. Either that or they’re happy to just suppress errors rather than actually fix them.

I upgraded my forum today to vBulletin 4.2.2 and noticed these errors on a search page:

Warning: Declaration of vBForum_Item_SocialGroupMessage::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupmessage.php on line 261

Warning: Declaration of vBForum_Item_SocialGroupDiscussion::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupdiscussion.php on line 337
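For anyone unsure what PHP is actually complaining about, here is a minimal reproduction (not vBulletin's real code): the child class declares the method with a narrower signature than its parent, which E_STRICT flags in PHP 5.4.

<?php
class Base {
    public function getLoadQuery($required_query = '', $force_rebuild = false) {}
}

class Broken extends Base {
    // Narrower signature than the parent: PHP 5.4 raises a warning like the ones above.
    public function getLoadQuery() {}
}

class Fixed extends Base {
    // Matching the parent's signature silences the warning.
    public function getLoadQuery($required_query = '', $force_rebuild = false) {}
}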

There seems to be a bug open for this at EPEL (Bug 807816 – Xalan-c segfaults on any input), but it has not been acknowledged or worked on.

I traced the problem to an incompatibility between xalan-c 1.10 and xerces-c 3.x. There is a patch as part of the EPEL xalan-c rpm which is meant to allow for this, but it seems broken as the source rpm didn’t compile for me.

An easy fix here is to upgrade both xalan-c and xerces-c to the latest versions. I hacked together rpms for these based on the work already done in EPEL.
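If you would rather roll your own than trust mine, the general approach is to rebuild the newer source rpms and upgrade both packages together (version numbers here are illustrative, not the exact ones I used):

# Build binary rpms from the newer source rpms.
rpmbuild --rebuild xerces-c-3.1.*.src.rpm
rpmbuild --rebuild xalan-c-1.11*.src.rpm
# Upgrade both at once so the library versions stay in step.
rpm -Uvh ~/rpmbuild/RPMS/x86_64/xerces-c-*.rpm ~/rpmbuild/RPMS/x86_64/xalan-c-*.rpm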

The new network is based on their existing CDN technology, but built on an entirely separate network infrastructure tuned specifically for the site acceleration and transaction needs of online retail sites. In other words, it's aimed at enterprises tired of sharing a least-common-denominator fast lane with everything from cute cat videos to gaming updates to whatever it is kids listen to these days.

Am I the only one who reads this as a lack of confidence in their core CDN product, or are they trying to differentiate themselves from other CDNs? To me, a CDN should be able to handle any traffic you throw at it, and if you are getting slowdowns then it's time to find a new CDN.

I would rather my CDN put more time and money into its core product than branch off and build a completely separate network. What's next, a sports CDN? News CDN? Porn CDN?

Recently I have seen "kraken-crawler/0.2.0" hitting my site. This is a bot used by Kontera (an advertising company) to "better understand and analyze your site's content" (according to their support staff).

Apparently the crawler adheres to robots.txt so you can block it by adding:

User-agent: kraken-crawler/*
Disallow: /

The URL to their crawler info is broken so it’s hard to get an idea of what this is used for. If you are also seeing this bot, hopefully this helps you.

Microsoft have finally released IE10 for Windows 7. It seems their download page (http://windows.microsoft.com/en-us/internet-explorer/downloads/ie-10/worldwide-languages) is getting pretty hammered. Looking at the requests on the page, it seems that everything is held up by a request to ajax.microsoft.com. The page loads the template header, but no more. Surely in this day and age a company like Microsoft would load its scripts asynchronously and prevent a single script from taking down the page.

Update: it seems this is a problem with the latest build of Firefox’s Aurora. Twitter is experiencing a similar problem with one of their scripts, so there may be a problem with Firefox’s script engine.

Amazon have released their new application management tool, OpsWorks, which uses Chef to deploy and maintain instances on AWS. While it looks neat and I'm sure it will work for startups, it's not something I could trust. I still like to get my hands dirty with server deployment, and I try to use bare metal rather than virtual instances where possible. Also, from what I'm reading, this tool is still very much a "beta" and is quite buggy.

The tool itself is not revolutionary, there are many other systems out there that do a similar thing. What is interesting though is that Amazon is offering this, once again improving the tools available without the need to use a 3rd party. Will this kill off competition or prompt the current providers to lift their game?

OpsWorks has brought up an interesting question: now that AWS is using Chef and has thousands of developers/sites using it, will Chef become the de facto standard, and will other configuration management systems die out? There is a rumour that Amazon might offer Puppet support alongside Chef, but that's just a rumour for now.

Personally I think Chef will increase in popularity due to OpsWorks, but I don't think Puppet et al will die away. Each system has its own merits, and devs/ops will use whatever suits them and their environment.

There's always been a problem with the Oracle-provided MySQL rpms and the older CentOS/RHEL MySQL rpms: the former provide "MySQL" and the latter provide "mysql", so a lot of the packages in CentOS/RHEL require "mysql", which creates conflicts.

A quick way to fix this is to use rpmrebuild -e -p and change the "Requires" from "mysql" to "MySQL". Hopefully in the future CentOS/RHEL will standardise on the Oracle naming convention, or the Oracle packages will become backward compatible.
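For example (the package name is a placeholder; rpmrebuild opens the spec in your editor so you can change the Requires line before it rebuilds):

# Edit and rebuild a dependent package so "Requires: mysql" becomes
# "Requires: MySQL", matching what the Oracle rpms provide.
rpmrebuild -e -p some-dependent-package.rpm
# Then install the rebuilt rpm from ~/rpmbuild/RPMS/.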

We just provisioned a new server with Sandy Bridge CPUs and 4 SSDs in a RAID 5 configuration. The server it is replacing was seriously underpowered, so this is a timely replacement. I ran hdparm on both servers to compare.
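If you want to run the same comparison, the usual invocation is along these lines (the device name will vary):

# -T measures cached reads (memory/CPU), -t measures buffered disk reads.
hdparm -tT /dev/sda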

It seems that the latest versions of vBulletin are very broken on PHP 5.4, even though they state that "vBulletin 4.x requires PHP 5.2.0 or greater and MySQL 4.1.0 or greater".

Most of the problems come from E_STRICT, which is part of E_ALL in PHP 5.4, but vBulletin and Internet Brands (who own vBulletin) seem very slow to fix these problems. They even denied it was a problem with vBulletin when I originally reported some of the errors in June 2012, stating "Closing this issue because it appears to be unrelated to vBulletin code."

They have since reopened the issue and it has been rolled up into a PHP 5.4 check task, but progress seems quite slow given that PHP 5.4 was released nearly a year ago and PHP 5.5 is due out soon.

So to get vBulletin working without errors on my sites, I have to modify and fix all of these problems myself. I wish I could contribute these fixes back to vBulletin or its users so the effort is not duplicated, but there doesn't seem to be a way to do it (hosting the files here would violate copyright).

I recently had a database server fail during a large DELETE query, which caused some problems with InnoDB's ibdata1: the index of the data file differed from what MySQL expected. As this wasn't one of our main servers I hadn't tuned InnoDB, and all the InnoDB data was in the single ibdata1 file. The only way for me to start MySQL was to add this to my.cnf:

innodb_force_recovery = 4

This forced MySQL to start in spite of the InnoDB errors, and I used mysqldump to extract all the data from the InnoDB tables.
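Finding which tables were InnoDB is easy with information_schema; a query along these lines does it:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'InnoDB';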