I find it annoying when the excuse given for why a bug won’t be fixed is that it only affects your small “edge case.”

I have found “edge cases” to actually mean “we don’t want to fix it.” Often the issue isn’t that some special code is needed to deal with an “edge case”; it is that the coding was done poorly and breaks in many different “edge cases.” It isn’t that those edge cases need to be coded for. It is that the code should have been written in a robust way that didn’t break for lots of “edge cases,” but the excuse given for not fixing the fundamental fragility of the code is that the bugs found are just “edge cases.”

There are real instances where “edge cases” is a justifiable excuse: for example, when adding special code to deal with some odd category of users just isn’t worth the cost.

But I am just so tired of fragile coding being excused as if breaking in lots of “edge cases” were perfectly acceptable, when the only reason it fails is that the code is fragile instead of being built in a robust way to begin with. The issue isn’t that you have some special edge case that needs special coding; the issue is that the code was written in an unnecessarily fragile way that makes it not work unless you follow a list of acceptable use cases.

Code should avoid adding requirements that are not necessary. I see the edge case excuse used far more often for requirements the code added which never should have existed than for actual edge cases that would require special code. For example, most web pages don’t require JavaScript (or IE, or Flash, or downloading 5 MB of code to view simple text…) to do what they should do (display text, display images…), but some sites code their pages to break if the user doesn’t have JavaScript enabled. Seeing this as an “edge case” issue misses the point: the code imposes superfluous requirements on the user that create “edge case” failures which wouldn’t exist but for poor coding practices. In some cases JavaScript is required to do fancy things that are useful, in which case gracefully degrading, and potentially not working fully, is acceptable.

To manually run cron tasks you can use the run-parts command in Linux.

So to run your cron.weekly tasks, for example, to test that a fix you just made runs without error (this is what I just did, in fact):

run-parts /etc/cron.weekly

run-parts will run all the executables in a directory (you must point it at the directory). So if you have several files in cron.weekly, you can’t use it to run just one of the files.

You may run into environment differences when running the script as a different user than the one the cron task runs as, so you can run it as that user if needed. Be aware this is a quick and simple way of testing part of the process; it doesn’t do a perfect job of testing whether it works as a cron task. But it will let you catch some failures quickly and fix them in time for the actual cron task to run. So do check that everything works after the real cron job runs.
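If you want to see what run-parts would do without actually running anything, or run the scripts the way cron typically does (as root), something like the following should work (the --test and --report options are standard in the Debian/Ubuntu version of run-parts):

run-parts --test /etc/cron.weekly

sudo run-parts --report /etc/cron.weekly

The first command just lists the scripts that would be run. The second runs them as root and prints each script’s name before its output, which makes it easier to see which script produced an error.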

This is just the kind of thing I said I would put in this blog. Simple stuff but things I forget – so I put it here to remember and maybe help out others who, like me, need really basic tips.

If you have a cron task item that is just a script (or have set up the whole task this way) and you just want to test that one item, you may run the script directly. For example (for a Linux shell script):
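Something like this (the script name here is made up for illustration):

sh /etc/cron.weekly/rotate-logs

or, if the file is executable, simply:

/etc/cron.weekly/rotate-logs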

The last programme I wrote was a Sudoku solver in C++ several years ago, so I’m out of date. My children are in IT, two of them – both graduated from MIT. One of them browsed a book and said, “Here, read this”. It said “Haskell – learn you a Haskell for great good”, and one day that will be my retirement reading.

This quote is from Prime Minister Lee Hsien Loong of Singapore, speaking in April 2015. I must say I think Western governments could be more effective with more scientists, engineers and coders in positions of power.

His father, Lee Kuan Yew, was the first and longtime Prime Minister of Singapore.

Another quote from the speech:

40 years ago, after doing a math degree, I went on to study computer science, on my father’s advice. He said there is a future in that, and he was right. So for the Smart Nation Programme Office, I have put Minister Vivian Balakrishnan in charge, reporting to me. Vivian is both a hacker and a dabbler – He used to be an eye surgeon but since he does not get to operate on eyes nowadays, he dabbles in building simple robots, assembling watches, wireless devices and programming apps. His day job is to be the Minister for the Environment and Water Resources, and so when he builds apps, he uses the real time APIs generated by the Ministry.

It is useful to have governments around the world with different priorities. While the USA has turned against science and engineering in many ways, others can pick up the slack. The USA had for decades been firmly in the position of promoting science and engineering, and the results of that are still blessing the USA with economic benefits, including the wonderful results of Silicon Valley and far-flung software development throughout the country.

Singapore can improve but they sure do many things well. And the sense to continue supporting science, engineering and emerging technology will benefit them economically as we move into a world where those fields only grow in importance.

Apollo 13 is a great movie on hacking. Hacking is applying intelligence to systems (including computer systems) to achieve a goal.

That can be done by criminals or devious people, but it doesn’t have to be. It is a bit annoying that some people equate hacking only with criminal behavior.

The hacking culture is much more about figuring out ways to make technology work for people than about criminals. We shouldn’t let a small sub-set of hackers defile the term.

The Apollo 13 command module in which the astronauts splashed down into the Pacific Ocean. Photo by HrAtsuo, via Wikimedia Commons.

When the oxygen tank exploded, Commander Jim Lovell made the famous statement: “Houston, we’ve had a problem.” The engineers on the ground and the astronauts had to devise solutions to several very difficult problems, and execute them quickly, in order to return the damaged spacecraft to Earth.

The amazing hacking done by the engineers (including the astronauts) at NASA to solve the serious problems faced by Apollo 13 allowed the astronauts to return home safely. Without the work of those government employees the astronauts would have died.

It is also good to remind people that government workers do amazing things. Sure, government workers can also harm society with bad work or by implementing bad policy. But it isn’t the fact that they work for the government that defines the value of the work they do.

I have been more often frustrated by Google the last few years than pleased with them. But they do still provide some pretty awesome tools. For example, Chrome Remote Desktop lets you access a computer over the internet (and lets you allow another user to access your computer securely over the internet).

Chrome Remote Desktop allows users to remotely access another computer through the Chrome browser or a Chromebook. Computers can be made available on a short-term basis for scenarios such as ad hoc remote support, or on a more long-term basis for remote access to your applications and files.

Chrome Remote Desktop is fully cross-platform. Provide remote assistance to Windows, Mac and Linux users, or access your Windows (XP and above) and Mac (OS X 10.6 and above) desktops at any time, all from the Chrome browser on virtually any device, including Chromebooks, Android phones and iPhones. The iPhone app is new.

Some users worry about installing such an app given all the spying and hacking scandals. That is not a completely crazy worry. Google, and others, have been taking advantage of weak user controls (and even bugs and workarounds to evade stated user preferences) to track users and use that information to make money selling ads. With many cool and useful tools there is a risk of them being misused. And the practices of governments and huge corporations have been egregious enough to give a sensible person pause. Still, in the right situations this is a pretty cool looking tool (similar things exist, but the combination of price [this being free] and simplicity makes this one interesting).

Sadly, one of the hassles in managing your own WordPress blog is dealing with people who use your blog to serve spam content. These hacks can insert spam links into your pages and posts, or create spam directories of entirely their own content on your domain.

There are many issues to deal with in re-establishing control of your server, but that isn’t within the scope of this post.

These are just a few tips for troubleshooting, to try and determine what is going on. Often your server has been hacked to allow uploaded PHP pages to be added, or existing WordPress PHP files to be edited.

One way to track down whether files have been changed or new ones added is to compare the WordPress files on your server to the files for a fresh install of the current WordPress. This assumes your blog is using the current version, which hopefully it is, because one of the big improvements WordPress made is making those updates automatic. That greatly reduces the chance of WordPress being the vector for infecting your server. If you are using an older version then just compare to the files for that version from the WordPress server.

If you don’t have a current backup, make a backup before trying this. Obviously, don’t make any deletions or changes to your server unless you understand what you are doing. You can create big problems for yourself.

You can use the diff command to view the differences between the WordPress files on your server and the fresh install from WordPress. I install the new WordPress in a new directory outside public_html. At the CLI on an Ubuntu/Linux server:
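A minimal sketch of what that can look like (the directory names here are assumptions – adjust them to match where your site actually lives):

cd ~

wget https://wordpress.org/latest.tar.gz

tar xzf latest.tar.gz

diff -rq wordpress public_html | less

diff -rq compares the two directory trees recursively and prints one line for each file that differs or exists on only one side. Expect legitimate differences in wp-config.php and in wp-content (your themes, plugins and uploads); files that differ outside of those, or PHP files that exist only on your server, are the ones to examine closely.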

Finding the right place to host your content is important. Thankfully there are several excellent providers. For virtual private servers (one physical server shared among multiple virtual servers), Linode and DigitalOcean are popular choices. There are lots of good options, but those two are widely appreciated for excellent service at a good price.

AWS EC2 (Amazon’s Elastic Compute Cloud) is not great for minimal hosting in my opinion – it adds extra complexity and is likely more expensive. But it is a great solution when you have the resources to manage it and you have significantly variable demand. Because of the ability to add capacity on the fly as you need it, you can maintain a low baseline, add capacity only as needed, and drop that extra capacity as soon as it isn’t needed.

Rackspace is another good option for hosting. Rackspace and AWS are often used for very large applications and sites, but Linode and DigitalOcean can also serve those needs and provide similar options to add capacity on the fly.

All of these options require you to manage your server (which may well be a virtual server – that is, just a portion of an actual physical server that you control).

Rackspace also offers colocation, where your physical server is put in their network operations center, with electricity, cooling, network and internet connections, and physical security managed by them, and the server managed by you.

As colocation has evolved, what is included, and to what level things like physical security and redundancy are dealt with, has evolved too. It has become quite complex to understand all the options for those organizations that need more than a simple virtual private server. As often happens when there is a business need, people offer solutions. And there are companies that specialize in helping you find the best colocation options for your needs.

Today the cloud options have led many organizations to eliminate (or greatly reduce) their own network operations centers and colocation needs. But cloud options are not always the right choice. And for some needs cloud options are not appropriate yet (mainly due to security concerns, or legal issues stemming from security concerns).

Managing your own servers with a colocation arrangement can be significantly cheaper than cloud hosting options (especially if you don’t need to massively increase capacity to deal with short-term bursts of demand). Of course, technology continues to change so quickly it is hard to predict what the future will bring.

Service quality is absolutely critical for colocation. While saving money is important, the reason colocation is selected (over virtual private servers or the cloud) is normally how critical the function is. Using experts to help sort through the options and assure the quality of service of providers is wise.

For Curious Cat I use OpenStreetMap and umap to create my own maps. Then I can embed them on my sites, for example here for Chiang Mai, Thailand.

For some reason when you use umap and select the embed option, it gives you code that leaves off the zoom setting altogether. It is an easy fix (and I imagine the code will be corrected, making this simple advice obsolete). But for now this is all you need to do.

The umap URL adjusts as you zoom the map in your browser; the current zoom level is included at the end of the URL.
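A map URL ends up looking something like this (the map name, id and coordinates here are made up for illustration):

https://umap.openstreetmap.fr/en/map/chiang-mai_12345#14/18.7883/98.9853

The part after the # is the zoom level (14 here), followed by the latitude and longitude. Copying that fragment onto the end of the URL in the embed code sets the zoom of the embedded map.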

You can also see that the “See full screen” link isn’t using the zoom settings. If you want that link to use a zoom setting of your choice, you can set that the same way.

Note that the view on your screen and the embedded map won’t be identical. The embedded map will likely cut off some of what you see (due to the sizing of the embedded map). You can also adjust the sizing of the embedded map by adjusting the height (it defaults to 300px, but you can make it 400px or whatever you want).