
The Yii Framework team recently released 2.0, the framework’s second major version. It is still a little rough around the edges, but my first impression is that it is a solid replacement for the previous major version, 1.1.

ActiveRecord has been given a facelift in 2.0, and one of the 1.1 features, scopes, isn’t mentioned until the very end of the ActiveRecord guide page. While declaring scopes in 1.1 was a convenient way to filter results down to the ones that matter at the moment, it also mixed a lot of query logic into the model. The developers decided to require a separate ActiveQuery class for anyone who wants to use a scope-style pattern.

In practice, there isn’t much new work here – just another class that you need to create and an override of your model’s find function. Check out the pattern below, as detailed in the Yii documentation.
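The pattern, as outlined in the Yii guide, looks roughly like this. The Comment model and the ‘active’ scope are illustrative; substitute your own model and conditions:

```php
<?php
// CommentQuery.php: the scope logic now lives in a dedicated query class
namespace app\models;

use yii\db\ActiveQuery;

class CommentQuery extends ActiveQuery
{
    // Scope: restrict results to active comments
    public function active($state = true)
    {
        return $this->andWhere(['active' => $state]);
    }
}

// Comment.php: the model overrides find() to return the custom query class
class Comment extends \yii\db\ActiveRecord
{
    /** @return CommentQuery the custom query class for this model */
    public static function find()
    {
        return new CommentQuery(get_called_class());
    }
}
```

Scopes then chain like any other query method: `Comment::find()->active()->all();`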

Over the years, we have inherited a number of databases, most of them running MySQL. Modifying those tables can lead to some odd situations: errors we shouldn’t be receiving and, as is usual with MySQL, cryptic ones at that. One recent database upgrade had us scratching our heads for a few minutes.

This database named its foreign keys using the format ‘fk_keyName’, and each of those keys was backed by an index. We had already upgraded numerous other tables in this database, dropping some foreign keys and replacing them with an optimized schema. However, upon dropping one particular foreign key, we received the following error:

General error: 1025 error on rename of '.\databaseName\tableName' to '.\databaseName\#sql2-aaa-aaa' (errno: 152)

After double-checking the index name and confirming it was correct, we ran ‘SHOW CREATE TABLE’ and noticed that although this foreign key name existed as an index, it had no foreign key constraint associated with it. So in this one instance, the ‘foreign key’ was only an index, most likely a mistake by the original authors.
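In a case like this, the fix is to drop the index rather than the (nonexistent) foreign key constraint. A sketch, with placeholder table and key names:

```sql
-- Inspect the definition; a real foreign key appears as a CONSTRAINT line
SHOW CREATE TABLE tableName;

-- Normal case: a true foreign key is dropped like this
ALTER TABLE tableName DROP FOREIGN KEY fk_keyName;

-- Our case: 'fk_keyName' was only an index, so drop the index instead
ALTER TABLE tableName DROP INDEX fk_keyName;
```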

Hopefully, this will save just a few minutes for someone out there, or save us a few minutes if/when we run into the situation again.

Rackspace reported their first-quarter earnings Wednesday, and the market did not like what it saw. By Thursday’s close, the stock had lost nearly 25%. The one piece of good news: revenue was up 20%.

Hollow Developers has always been a great fan of Rackspace, and we use their cloud server products as our failover systems. However, we were forced to move to other cloud server providers for our everyday hosting needs, primarily due to cost. Amazon Web Services (AWS) has the hefty weight of Amazon behind it, allowing AWS to adopt Amazon’s aggressive pricing models and undercut competitors by a fairly substantial margin. There has always been a premium cost to do business with Rackspace, and this premium is definitely worth it for their ‘fanatical’ customer service. However, some businesses cannot afford that premium, or don’t depend on customer service enough to justify paying it. In our case, interaction with customer service is so limited that a monthly premium doesn’t make sense.

So, how does Rackspace turn things around?

First, publish benchmarks of their cloud servers instead of relying on third-party websites to compare their CPUs against Amazon’s. In our experience, AWS virtual machines don’t have the CPU horsepower that some would expect. If Rackspace doesn’t exceed Amazon’s performance for comparably priced servers, bump up virtual machine resources until it does. This metric is the most important one for websites that rely heavily on dynamic pages, and it can become the deciding factor in choosing one service over another. Right now, it is not immediately apparent that Rackspace’s entry-level server actually outperforms Amazon’s closest alternative when comparing CPU and price.

Second, allow more administrative actions to be performed via the Rackspace control panel. A Scalr-like setup for auto-scaling and easier cross-region deployments would be a huge value-add to the product. With Scalr running $100/month, some basic built-in scaling could be a deciding factor in choosing Rackspace over a competitor. Alternatively, partner with Scalr to provide some sort of discount on the service. Rackspace customers already receive discounts at a number of other websites, and putting a discounted Scalr subscription on the table may sway those wanting to grow their cloud footprint.

Finally, take a page out of Amazon’s book and add a free or nearly-free tier to Cloud Servers. Allowing people to sample the product, even for a few months, would invariably lead to some people remaining on the platform instead of seeking out competitors. Pairing this with a Scalr partnership would make Rackspace that much more attractive.

Surely, there are other things that Rackspace could do to regain the confidence of investors, but these seem to be the most important among small businesses with small but expanding cloud footprints like ours.

* Disclaimer – Hollow Developers is a small investor, customer, and former affiliate of Rackspace.

Over the past few months, Hollow Developers has migrated servers into the Amazon EC2 environment. As part of this setup, a load balancer distributes traffic across a number of individual EC2 web server instances. One limitation, however, is that Amazon’s load balancers don’t work on root domains (for example, http://hollowdevelopers.com/, with no www in front). The reason is that the load balancer’s DNS record must be a CNAME record rather than an A record, and at most DNS providers, root domains only allow A records.

CNAME and A Records: A CNAME record aliases one hostname to another. It lets you create entries like ‘webmail.hollowdevelopers.com’ that act as an easy-to-remember stand-in for a longer address such as ‘google.com/a/hollowdevelopers.com’. An A record, by contrast, maps a hostname directly to an IP address. An Amazon load balancer is addressed by a hostname like ‘hollowdevelopers-load-balancer.ec2.amazon.com’ (and its underlying IP addresses can change), so a CNAME entry is required.
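In zone-file notation, the difference looks roughly like this (the IP address below is a placeholder from the documentation range):

```
; An A record maps a hostname directly to a fixed IP address
hollowdevelopers.com.      IN  A      203.0.113.10

; A CNAME record aliases one hostname to another hostname
www.hollowdevelopers.com.  IN  CNAME  hollowdevelopers-load-balancer.ec2.amazon.com.
```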

So, this ultimately requires websites to use ‘www’ or something similar in front of their domain, since the ‘www’ record can be a CNAME record. As part of their sales pitch for their Route 53 DNS service, Amazon mentions that Route 53 allows you to place CNAME-type records into your root domain. However, we have always been happy with our DNS provider, CloudFlare. So, what is an easy way to ensure that all traffic goes through our load balancer?

At first glance, Hollow Developers was OK: our web servers automatically redirect users from the root domain to the www domain, primarily for consistency for search engine crawlers. However, for that redirect to happen, the user has already hit our server on the root domain. We wanted all traffic to go through the load balancer, regardless of the small number of hits that may come in through the root domain. This is where CloudFlare’s page rules came in.

CloudFlare page rules allow website owners to write redirect rules, allowing all traffic from the root domain to redirect to the www domain. Best of all, even free CloudFlare accounts allow a few page rules, meaning that anyone can use this trick for a free alternative to Amazon’s Route 53. Just a few rules will get you up and running:
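Reconstructed as a sketch (in CloudFlare’s syntax, ‘*’ matches any URL text and ‘$1’ substitutes the matched text into the destination):

```
Rule 1 (pattern):  hollowdevelopers.com/*
       (action):   Forwarding, 301 redirect to http://www.hollowdevelopers.com/$1

Rule 2 (pattern):  hollowdevelopers.com
       (action):   Forwarding, 301 redirect to http://www.hollowdevelopers.com/
```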

The first rule will forward all pages on the domain to the exact same page on www. The second rule forwards the ‘naked’ root domain to the www domain. For more information on the syntax used, consult the CloudFlare documentation on the Page Rules interface.

There are numerous alternatives to this approach – including the use of Amazon’s Route 53 DNS service. However, we wanted to keep CloudFlare’s security and DDOS prevention features, so this was not an option we wanted to take. Have other alternatives? We would love to hear your comments/questions.

HollowDevelopers uses the Facebook Comments for WordPress plugin, which helps reduce comment spam and allows for more active user engagement. It has worked well for the past few years, but the plugin hit the deadpool a while ago, with no updates since 2011.

For a while now, the plugin has stated that the application ID entered in the settings was invalid. This wasn’t true, obviously, as the comments functionality still worked. So, the error was more of an annoyance than a real problem. However, after a few minutes of research, a quick resolution was found in the comments on the plugin’s homepage via a contributor.

It turns out that a one-line change to the plugin code fixes it:

Open facebook-comments-admin.php and scroll down to line 78:

$needle = 'wall';

Replace that line with:

$needle = '<title id="pageTitle">Facebook</title>';

The recent shutdown of GoDaddy’s cloud product shows how hard it can be for even a well-funded cloud infrastructure start-up to compete against the behemoth that is Amazon Web Services (AWS). From the biggest names in tech (Netflix, Reddit) to some of the smallest, companies depend on AWS to power their cloud server infrastructure. However, even as Amazon dominates the market, they are not resting on their laurels, introducing new features and price drops nearly every week.

Amazon’s latest announcement is the ability to copy EC2 AMIs across different AWS regions, allowing server administrators to store these server images in different regions. Why is this important? Take the recent outage that took down Netflix and several other large websites. The outage affected Amazon’s US-East region, but many other regions exist across the world, and those regions were still online.

For the small guys that can’t afford a full sys ops team, keeping EC2 AMI images on standby in other regions can allow for a quick failover with minimal cost. You only pay for the size of your AMI images, and can bring servers online only if they are needed. (Databases are another story, and may require some multi-master replication strategies, but that’s another blog post.)
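As an illustration using today’s AWS CLI (the post predates it; region names, image IDs, and instance type below are placeholders), the standby workflow boils down to two commands:

```shell
# Copy an AMI from the primary region to a standby region.
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-west-2 \
    --name "web-server-standby"

# If the primary region goes down, launch from the copy in the standby region.
aws ec2 run-instances \
    --region us-west-2 \
    --image-id ami-0fedcba9876543210 \
    --instance-type t3.micro \
    --count 1
```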

At Hollow Developers, our mission-critical applications are hosted on AWS with a hot backup waiting to go online at a separate hosting company in case of an AWS failure. As AWS offerings continue to increase as prices decrease, the choice of hosting the hot backup in a different AWS region is tempting. For instance, it is much easier and cheaper to interact in one ecosystem rather than multiple. However, as rare as it may be, an entire ecosystem could get knocked offline/hacked/etc. As always, it will be interesting to watch the cloud infrastructure competitors duke it out over the next few years & hopefully provide even better solutions that can help websites experience the optimal 100% uptime.

We use XAMPP on a few machines to quickly test PHP scripts. Upon upgrading to the latest version of XAMPP, everything slowed to a halt. It turns out that the database connection was slowing down our scripts, both in phpMyAdmin and in our custom code, most likely because ‘localhost’ now resolves to the IPv6 address ::1 first while MySQL listens only on IPv4. Adding one line to phpMyAdmin’s config.inc.php and adjusting the database connection strings in our custom scripts fixed things.

The line in config.inc.php:

$cfg['Servers'][$i]['host'] = '127.0.0.1';

Our custom scripts were fixed by using 127.0.0.1 instead of localhost.
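As an illustration (credentials and database name are placeholders), the change in a custom script is just the host value:

```php
<?php
// Before, new mysqli('localhost', ...) triggered a slow hostname lookup;
// connecting via the IPv4 loopback address skips it entirely.
$mysqli = new mysqli('127.0.0.1', 'dbUser', 'dbPassword', 'databaseName');
if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}
```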

(On a side note, we had already changed our hosts file & ruled that out as a problem prior to changing these values.)

Last year, we posted about GoDaddy’s alternative to Rackspace’s cloud servers. We delivered a mixed verdict on both products, siding with GoDaddy primarily for price and ease of use, while Rackspace won the reputation, documentation, and API control.

Starting today, however, we can definitely recommend Rackspace over GoDaddy. The reasons are not just GoDaddy’s SOPA stance or the hours-long outage a while back: GoDaddy is shutting down the product. It was a great first step for beginners into the world of cloud computing, but it seems that the low price and a tarnished reputation drove it out of the market. In a memo that we received late last week, the company announced that Cloud Servers will be discontinued, giving customers until mid-April 2013 to move off the platform.

As a rule, Hollow Developers always maintains our primary servers on one web host in one city, as well as backup servers on a different web host in a different city. Every provider is going to experience downtime, and we want to make sure that we are always online. Look forward to another host vs. Rackspace article in the near future, as we move our servers away from GoDaddy and find a new home.

Here is the bulk of GoDaddy’s memo to existing customers:

Go Daddy appreciates your business with us – we know you have many choices when choosing a business partner online. We continually strive to deliver you the best products and the best support in the industry. After careful review, we have decided that the best way to bring cloud hosting to our customers is by integrating it with our Web Hosting and Virtual Private Server products. As such, we will be discontinuing Cloud Servers as a stand-alone product.

We know you have invested your time building your business on top of our Cloud Servers product, and we want to work with you to take the next step. We will be giving our Cloud Server customers until April 15, 2013 to migrate their data and processes to a new platform.

Our customer care representatives will be reaching out to you over the next week to help you make this transition. We have several alternative products to meet your hosting needs, including:

Google unveiled their ‘next dimension’ of Google Maps earlier this week, and the largest takeaway from their announcement was the inclusion of better 3D modeling in Google Earth. Using measurements taken from airplanes, Google Earth will recreate the terrain and buildings to ensure that all buildings are captured and recreated in the virtual environment. As I said in the previous post, I thought this would be a pretty cool addition, but I wanted to see something that would be relevant to everyone in the world, not just the selected areas where Google planes would fly overhead to collect the measurements. Alas, that’s what we got, but I won’t complain much – it’s still pretty cool, although I doubt it will come to my neck of the woods any time soon.

But, you know what would be even cooler? Some of my Google Maps wishlist items below.

Instant Routing Updates

Make a change to Google Maps in Map Maker (the service that allows users to edit Maps), and have the Maps products update immediately for better routing around traffic or construction. Right now, routing updates can take months to roll out to all Maps products because of the computing power needed to produce directions. Maybe offer an option to force an update?

Recent Satellite Imagery

Have satellite images automatically update when a better image is taken. Right now, it seems this is a manual process and a judgment call on whether a newer image is better than the existing one. I would think there would be some concrete method of determining this, since percent cloud cover is already captured for many satellite images, as is the average resolution of the image. Any time we can get more up-to-date imagery in our hands is good with me. (After thinking about this for a while, I doubt this will ever be in place. Google blurs a lot of high-security areas to prevent the use of the tool for nefarious purposes.)

Google has announced that they will introduce the ‘next dimension’ of Google Maps tomorrow at 12:30 pm ET. The event comes the week before Apple will likely announce that they will start to remove Google Maps from iOS. So Google’s looking to make Apple regret that decision. Much analysis has gone into what this ‘next dimension’ may be, but being a Google Maps addict, I am adding my speculation to the mix.

The Next Dimension – 3D from Street View?

So the next dimension that Google Maps would have in a literal sense would be 3D. Already, you can see “2.5d” buildings in Google Maps, but these are fairly crude gray blocks. Leveraging technology from their Street View cars, it would be nice to supplement these gray blocks with actual Street View images. From my experience, though, it seems that the Street View images identify too many false positive buildings, from parked trucks to signs to people. (Take a look at this by attempting to click on what Street View thinks is a flat surface.) So, while this would be a nice next dimension, I doubt that this is what we will see tomorrow.