Friday, July 17, 2009

This blog reported on July 7 that the Lonelygirl15 and Katemodern websites on lg15.com had been down and were still not back up. Lonelygirl15 Creator Miles Beckett commented on July 8 that they were "looking into this."

I've said this before in chat, but since not everyone goes there, here's my view on the tech side of things:

- We can still access the sites themselves and see the error. That tells us the entire server is not down. When the software still runs but the content can't be accessed, in a CMS like WordPress that usually points to a database problem.

- The message claims "Sorry folks, due to large traffic WordPress is having some issues." If you google that error, you'll notice it doesn't really appear "in the wild" - it's not a native WordPress error. It seems to be a custom message written for the site, which means we can't rely on it to be technically accurate. Interestingly, however, large traffic can max out the number of allowed database connections, leading to trouble displaying content. In other words, it's at least possible that this message is shown whenever a database error occurs.

- If you go to the old forums, you'll get a rather clear error message for a change: "Can't connect to MySQL server on 'data-back'". For those who don't know, MySQL is the database system that most free web software, like WordPress or phpBB, runs on.
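To illustrate the "large traffic" point above: MySQL refuses new connections once the number of connected threads reaches its max_connections limit, and a busy WordPress install can hit that ceiling. Here's a minimal dry-run sketch of that threshold logic - the numbers and the `connection_state` helper are mine for illustration, not anything from the lg15 servers; on a live box the two values would come from `SHOW STATUS LIKE 'Threads_connected'` and `SHOW VARIABLES LIKE 'max_connections'`:

```shell
# Illustrative sketch only: the saturation check a DBA would do by hand.
# connection_state THREADS LIMIT -> "saturated" or "ok"
connection_state() {
    threads=$1
    limit=$2
    if [ "$threads" -ge "$limit" ]; then
        # New connections are refused, so pages can't load their content.
        echo "saturated"
    else
        echo "ok"
    fi
}
```

For example, `connection_state 151 151` prints "saturated" - and 151 happens to be the stock max_connections default on recent MySQL versions, so a default-configured server under heavy load gets there fast.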
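And to show how quick the first diagnostic step is: `mysqladmin ping` tells you within seconds whether a MySQL server answers at all. A hedged sketch - the `check_db` wrapper is my own naming, with "data-back" standing in as the hostname from the forum's error message:

```shell
# Sketch: is a mysqld answering on the given host at all?
# check_db HOST -> one-line verdict
check_db() {
    host=$1
    if mysqladmin --host="$host" --connect-timeout=5 ping >/dev/null 2>&1; then
        echo "$host: mysqld answers"
    else
        echo "$host: no mysqld reachable"
    fi
}
```

Running `check_db data-back` from their web server would settle in one command whether the forums' error message is still accurate.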

So, in summary: We have a situation which looks like a database error, one error message that could mask a database error, and one error message that says there's a database error.

I think it's pretty clear where the issue is.

Why am I telling you this? Because solving this problem should be about as difficult as fixing the header link.

If the database server itself is still running, the fix should require only a single command: "/etc/init.d/mysqld restart". Even if their server is set up badly, it should be nothing more than "mysqladmin shutdown && mysqld_safe" (plus arguments). I myself have done the former countless times; I assure you, it really is that simple.
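For the skeptical, here's a dry-run sketch of those two restart paths. The function only prints the command it would run - nothing is executed - and the `restart_plan` name and its init/raw switch are my own framing:

```shell
# Dry-run sketch: which restart command applies to which setup?
# restart_plan init -> the one-liner for a standard SysV setup
# restart_plan raw  -> the fallback when no init script exists
restart_plan() {
    case $1 in
        init) echo "/etc/init.d/mysqld restart" ;;
        raw)  echo "mysqladmin shutdown && mysqld_safe &" ;;
        *)    echo "unknown setup" ;;
    esac
}
```

Either way it's one line at a root shell, which is the point: this is minutes of work, not ten days.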

If the database server itself crashed, then it's even easier - just restart the damn server. If they don't have access to it from the outside, just call the datacenter and they'll reboot it. Even if they're a low-priority customer with no special service plans, that should not take more than a few hours.

Moral of the story: As usual, there are two options:

1. The situation is simple, but the Cs simply don't give a shit.
2. There's shit happening (like wiped-out databases, lacking credentials, money running out) and they don't have the guts to tell us.

Either way reflects poorly on them.

(Oh, and in case you didn't figure: No, it doesn't take 10 days to investigate and work around this problem.)

In related news, as soon as the bot greenlights v18 on Safari, it's gonna be released. Maybe Miles is still too busy fixing that header link to restart his MySQL server?

Thanks for the analysis, Renegade. I suspected it was yet another simple fix that had the mad tech skillz of the EQAL staff baffled yet again :)

@Modelmotion: I would be more specific and postulate that the Gayliens have probably kidnapped Miles. One shudders at the horrors he is being subjected to that would keep him from paying attention to his FLAGSHIP website. I can think of no other reason why they would ignore such huge errors.