Packaging mods is not the most fun part of building a mod, so why do it manually? I run Mac OS X, which means I have a terminal and can run commands directly to accomplish the packaging. I just needed to build a script. Easy to do, and now it's done, so I will detail the script here.

First, some settings. The first tells me where my mods are located; the path matches what I have in SVN for my mods, so you can piece together an image of my setup. The next is an array of disallowed files that we want to ignore when reading directories. The final one is the full path and name of the tar binary.

I custom installed a tar binary since the built-in OS X one adds resource forks, and I did not want to break anything by replacing the built-in tar with my own (I doubt it would break, but I didn't feel like finding out months later and having to fix it).
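The settings block looks something like this; the paths and file names below are placeholders rather than my real layout:

```php
<?php
// Illustrative settings only; the real paths differ.
$dir = '/Users/me/svn/mods';                               // where the mod folders live (placeholder)
$disallowed_files = array('.', '..', '.svn', '.DS_Store'); // entries to skip when scanning
$tar_bin = '/usr/local/bin/tar';                           // the custom-installed tar, not /usr/bin/tar
```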

// No more changes!!!
Warp_header();

// Package them?
if (isset($_POST['package']))
	doPacking();

listMods();
Warp_footer();

This needs little explanation: it is my header, packaging code, mod list, and footer (to properly close all html tags 😉 ).

// List the mods!
function listMods()
{
	global $dir, $disallowed_files;

	// Get the mods.
	$mods = scandir($dir);

This is the start of my mod listing, in which I globalize the directory and disallowed files. Then I simply perform a scandir to get a listing of all my mods. The next section of code contains html, so I will skip it since it isn't important.

Simply put, this code prepares the output by using simplexml to create an object from the xml data, which I can then use to get the name of the mod (much easier than reading the file and pulling the name out with a regex). Finally I sort the array by its keys. Again, more html to output this data; I simply used checkboxes.
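A minimal sketch of that step, using an inline sample instead of a real file; the package-info.xml name and its &lt;name&gt; element are assumptions based on the SMF package format, so adapt to whatever your mods actually use:

```php
<?php
// Parse the mod's XML to get its display name -- far easier than a regex.
// In the real script this would be simplexml_load_file() per mod folder.
$sample = '<package-info><name>Example Mod</name><version>1.0</version></package-info>';
$xml = simplexml_load_string($sample);
$name = (string) $xml->name;

// Key the listing by display name, then sort it for output.
$list = array($name => 'example_mod');
ksort($list);
```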

function doPacking()
{
	global $dir, $tar_bin, $disallowed_files;

	echo '
	<div>
		Packing...<br />';

This function does the actual work. For this one I just need to global the directory and tar binary.

$force = isset($_REQUEST['force']);

This allows me to force a mod to be packaged even if a package already exists for that version. I didn't need anything complicated, as this is rarely needed.

// This just finds what mods we want to package.
$allowed_mods = array();
if (isset($_REQUEST['mods']))
	foreach ($_REQUEST['mods'] as $in)
		$allowed_mods[] = trim($in);

This simply loops through to find all the mods I want to package. If this were a public script, I would need to validate the input against an array of mods that actually exist, but since it is just used internally, I didn't. Now we get to the actual work.

// Get em!
$mods = scandir($dir);
foreach ($mods as $mod)
{
	if (in_array($mod, $disallowed_files))
		continue;

	if (!empty($allowed_mods) && !in_array(strtolower($mod), $allowed_mods))
		continue;

We start by scanning the directory again and removing the files we don't want; this time we also skip any mods we didn't select for packaging.

This just puts all files inside each mod folder into an array and removes the files/folders we do not want to package. I use the same structure for all my mods, so I don’t have to worry about individual cases.

I don't usually update the version in my .xml file, only in my readme, so I need to get the latest version from the readme file. This is used to update my version info in multiple places. The next part is more html, so I am skipping it again; it basically checks for existing or missing versions and lets me know.
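The readme extraction could look something like this; the "Version X.Y" line format is purely an assumption for illustration, since the real readme layout isn't shown here:

```php
<?php
// Pull the latest version number out of a readme file.
// The "Version X.Y" line format is an assumed convention.
$readme = "Example Mod\nVersion 1.2\nChanges: ...";
$version = null;
if (preg_match('~Version\s+([\d.]+)~', $readme, $match))
	$version = $match[1];
```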

Now I simply loop through all the files, looking for the .xml ones, as these have a version tag in them. Once I locate them, I update them with the new version. Prior to writing the file back, I make sure nothing went wrong.
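A sketch of that version bump, with the tag name and file contents as assumptions; the real script reads and writes the actual files:

```php
<?php
// Replace the <version> tag contents in an .xml file's text.
$version = '1.2';
$contents = '<package-info><version>1.1</version></package-info>';
$updated = preg_replace('~<version>[^<]*</version>~', '<version>' . $version . '</version>', $contents);

// Check nothing went wrong before writing the file back.
if ($updated !== null && $updated !== $contents)
	$ok = true; // file_put_contents($file, $updated) in the real script
```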

Now for the actual fun stuff. Prior to packaging the mod, I need to change into its directory. This prevents a chain of parent folders from ending up in the archive when the mod is unpacked. Then finally I run the command to package the mod. I have it automatically package into the releases folder based on the mod name and its version, and I explicitly name all the files I want to package, thus avoiding disallowed files.
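That step might be sketched like this; every value here is a placeholder that the real script fills in from the directory scan above, and the actual shell_exec() call is left commented out:

```php
<?php
// Placeholder values; the real script derives these from the scan.
$dir = '/Users/me/svn/mods';
$tar_bin = '/usr/local/bin/tar';
$mod = 'example_mod';
$version = '1.2';
$files = array('package-info.xml', 'install.php', 'readme.txt');

// Change into the mod's folder first, so the archive holds just the files
// and not a chain of parent directories.
$mod_dir = $dir . '/' . $mod;
if (is_dir($mod_dir))
	chdir($mod_dir);

// Output goes into releases/, named by mod and version, and only the
// explicitly listed files are included -- no disallowed files.
$cmd = $tar_bin . ' -czf ' . escapeshellarg($dir . '/releases/' . $mod . '_' . $version . '.tgz') . ' ' . implode(' ', array_map('escapeshellarg', $files));
// shell_exec($cmd);
```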

That is all the actual work to handle the packaging. I haven't tried it yet, but I added code which theoretically should allow this script to work from the CLI.

// Not used yet, but can handle cli stuff.
function handle_cli()
{
	if (in_array('force', $_SERVER['argv']))
		$_REQUEST['force'] = true;

	foreach ($_SERVER['argv'] as $in)
	{
		if (in_array($in, array(basename(__FILE__), '--', 'force')))
			continue;

		$_REQUEST['mods'][] = trim($in);
	}
}

Someday I may actually test the code, but oh well for now. The final bit is more html for the header and footer. So that is all the code I need.

For the SimpleDesk website, I made a very easy to use and very sleek download manager. Complete with branch, version, file and mirror management. Simply put, this thing is very powerful and flexible. While I didn’t add it in, I could easily expand this script to manage multiple pieces of software as well.

I should note that the way I installed MySQL (apt-get), a debian.cnf file is created. I haven't bothered to see whether this file is actually used by Ubuntu, but nonetheless I need to copy it, as it contains a MySQL user/password for use by the system. That isn't really safe, considering it is a root account, though setting open_basedir restrictions helps with that. As well, in the mysql.conf folder a mysqld_safe_syslog.conf file exists; I don't use safe mode, so I don't care about it.

Now I won't lie: at this point something went horribly wrong, and I have yet to figure out why. I have done this many times before and never had an issue. After trying everything I could think of to get MySQL started, get rid of the errors, and even moving everything back, I still had no luck. I ended up restarting the entire box, and after that things just worked; the second time around everything went fine. I have no clue why it failed the first time.

Just to add a finishing touch, I edited /home/configs/my.cnf and changed the datadir in it to point to /home/data/conf.

That takes care of that. Next is to figure out all the configuration files I need to duplicate over for my mail setup. Hopefully after all that, my web site should be able to easily switch from ubuntu to another operating system and be up and running in no time.

A random thought has hit me. Most people try to keep their MySQL user credentials secure. But why? If a server has been set up properly, it becomes a moot point.

The idea occurred to me when thinking about opening a site's source code up. If I opened the site up, I would be giving out access to my settings and configuration files, and these files also contain MySQL user credentials. So either I attempt to remove those, or I disallow access. However, I then wondered why even worry.

I will use my own site as an example. If I give out the MySQL user credentials for my inactive forum, what good would they do someone? phpMyAdmin is secured behind an HTTP auth page (over SSL) before you can even supply the MySQL user credentials, and I have configured all my MySQL users to allow localhost connections only, so only connections from the server itself are accepted.
So if somebody had my MySQL user credentials, they would be completely useless. If attackers managed to exploit the server and upload files that do malicious things, they would most likely be able to have that script find and read the settings file anyway, provided it was somewhere within the open_basedir restrictions for that site. And if they managed to exploit the server, they could do far more damage than logging into MySQL. Although, since only I have a login to my site (secured behind SSH), I have very few web-accessible files that apache can edit or write to. To fix any MySQL damage they did, all I need to do is restore all the MySQL data (users as well) from a backup. File damage is much worse, as it is easier to leave a backdoor into the system that way.

Although I don't run any control panel and use phpMyAdmin simply for ease of access, the same applies to sites that run admin panels such as cPanel. Unless attackers have the cPanel login information, the user installed phpMyAdmin for some reason, or the MySQL users are configured to allow outside connections, the credentials are useless, the exception again being an attacker able to upload a malicious file.

For shared servers, this could be a worry: if your MySQL credentials are publicly known and an attacker happens to also have a site on your shared server, my points above carry little value. Shared servers carry a risk, and that risk means protecting all your credentials more heavily, as an attacker could simply be on the same server as you.

Of course, this all depends on the server admin and webmaster having properly set things up beforehand, such as access to phpMyAdmin and other scripts. However, I think this still makes a good point: even if MySQL credentials are publicly known, they still don't offer much.

Microsoft decided it would be a good idea to install a hidden addon into Firefox installs for those who have certain services installed. The major point being it is a hidden, unknown addon that you cannot remove yourself. How friendly is that?

Google's Chrome does the same thing. After first running it, I discovered that it had installed a hidden service in my user's Application Support (Mac OS X) folder. I ran a few commands as root to remove the file and chmodded it to “0000”. I also removed Chrome and checked all the files it modified.

Other than being totally shocked that Chrome installs a service without my permission, I am now questioning whether to continue using any Google applications. This would all have been ok if Google Chrome had asked for permission to install a service that supposedly “checks for updates”. Of course, I wouldn't have allowed it anyway; I have enough services running on my poor laptop and don't need to add a useless one.

Hopefully both Microsoft and Google get it straight. Although I can't say much for Apple, who forces you to install QuickTime and Apple Update on Windows. So maybe all three need to get a clue. I want to know what you are doing to my system. Keeping this up will only push me toward full-time linux usage more and more.

As a quick end note: procrastination paid off, as I haven't run Windows updates in about 2 weeks. Just goes to show that sometimes procrastinating can be a good thing.

My current host has a unique feature that allows me to set up virtual machines easily. Since that is possible, I may someday want to switch to another operating system, so I wanted to split all my /home and configuration files out onto a separate drive, which is entirely possible with my host.

The nice thing about linux systems is that since they operate on open source software, things like configuration and setup are becoming less of a problem. So if I set up my files correctly and use the right symlinks, I could easily switch my operating system without missing a beat.

I will avoid discussing the details of getting the other drive set up on my machine. However, getting it working properly does take a little bit of work, all of which is easy.

First, after I made sure the new drive existed in /dev, I created a folder where I would mount the files.

$ mkdir /home-new

Now that the directory exists, I simply mount the drive to it.

$ mount /dev/xvdc /home-new

With some basic commands, I tested to ensure that the drive works and is functioning properly. The next step involves copying the files over. However, I had my site set up with permissions already, and copying the files would result in them being owned by root again. Luckily the copy command has an argument that allows us to preserve ownership.

$ cp -Rp /home/* /home-new

Once that completed, I ensured that the files all worked on the new drive. Next was to edit my /etc/fstab so the drive would mount correctly on reboots. Simply put, I just copied the entry for my root drive, changed the /dev device to the correct drive and the mount point to /home. Just in case something went wrong, I shut off apache and moved /home to /home-old with a move command.
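The resulting fstab entry would look something along these lines; the filesystem type and options here are assumptions, since only the device and mount point are given above:

```
/dev/xvdc  /home  ext3  defaults  0  2
```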

Now, I could have easily unmounted the /home-new drive and remounted it on /home, but just to ensure everything worked, I issued a reboot command and waited for my server to come back. After the reboot, I was able to see my site working again. However, I was not done yet: my apache configuration files were still on the main drive. An easy way around this is moving all my virtual host configuration files to my home folder and creating symlinks to them.
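In the same style as the commands above, the move-and-symlink step could look like this; the vhost file name and target folder are placeholders based on a stock Ubuntu apache2 layout:

```
$ mkdir -p /home/configs
$ mv /etc/apache2/sites-available/default /home/configs/default
$ ln -s /home/configs/default /etc/apache2/sites-available/default
```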

This completes the move of my apache configs. I modified the default configuration, which contains things like the port and other apache configuration changes. I simply repeated this for the other configurations I changed and want to carry over if I switch operating systems.

The only thing left to do is change where my MySQL data is stored, though I will work on not breaking that the first time around some other day 🙂

WordPress by default doesn't protect its wp-includes and wp-content folders. While WordPress doesn't do stupid things in most of these files, it still doesn't do a simple defined() check to ensure we came from a privileged file. SMF does this, and it prevents direct loading of any of the Source files.
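For reference, the SMF-style guard is just a two-line check: the legitimate entry point defines a constant, and every included file dies without it. A minimal sketch (with the define shown in the same snippet so it is self-contained):

```php
<?php
// In a real install, the entry point (e.g. index.php) defines this constant.
define('SMF', 1);

// At the top of every include, this refuses direct loading:
if (!defined('SMF'))
	die('Hacking attempt...');
```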

Getting around this is not as simple as it should be. To start with, I added a “.htaccess” to my “wp-includes” folder with the following contents.

Deny from all

However, that broke the built-in rich editor in WordPress. So, now to edit “wp-admin/includes/mainifest.php” and change the following.

All I did was change .php to .js, since after reading the directory I figured out the .php version is just a compressed version. I removed the “$zip&” part as well, since it didn't make sense to keep it anymore. The “c” argument just tells it whether to compress or not. So this is my resulting change.

However, since I was now loading some content from my includes folder, a tweak was needed in my “.htaccess”.

<Files *.php>
	Order Deny,Allow
	Deny from all
	Allow from localhost
</Files>

Simply put, that will deny access to all php files in my “wp-includes” folder. That worked, and a simple duplication of the file into my “wp-content” folder produced the same results. However, I still wasn't done. A simple .htaccess password-protected directory for my “wp-admin” folder offers a very basic block to help prevent unauthorized access. Although it isn't using a very strong password or username, it still prevents fly-by attacks.

Now I just needed to populate that file. Since I have apache installed on my laptop, I simply opened Terminal, ran “htpasswd -n username”, and gave it a password at the prompts. Then I copied the line from the window into my .access file and saved. Everything works, and my entire wp-admin folder is protected from unauthorized web access.
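The matching .htaccess directives for the wp-admin folder would look roughly like this; the realm name and the AuthUserFile path are placeholders for whatever password file you created:

```
AuthType Basic
AuthName "Admin Area"
AuthUserFile /path/to/password-file
Require valid-user
```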

However, “wp-login.php” contains calls to three files in the wp-admin folder: “login.css”, “colors-fresh.css” and “logo-login.gif”. Simply copying those three files to my theme solves half the problem. Then I just modified wp-login.php to call those files directly rather than via the functions that previously loaded them. “login.css” also needed to be modified, as the path to the logo-login.gif file needed adjusting.

When I started my site, I knew that I would rarely see visitors. It is more of a personal test site than anything else. I recently decided to get rid of my own site and get a blog, mostly because my site isn't really for communication among many individuals; rather, it's for my own discussions and fun. In this respect, a blog makes more sense, doesn't it?

I wanted to set up SVN on my server. Why, you ask? Well, just because I can, really. The most important reason is to get my mods and other files into a repository that would also act as a backup. I set it up on my server, as I never saw the point in keeping it on my own system.

Luckily, like most linux systems, Ubuntu lets me do this without breaking a sweat. I won't go into why I am running Ubuntu; I just felt like using it as my server software of choice, although I plan on looking into Debian.

When I went to set up SVN, I decided to set it up with DAV, mostly because it would make it easier to give out urls to the svn.

$ apt-get install subversion libapache2-svn

After that ran quickly and I accepted the download of the files, I was almost done. I set up a svn repository and did an initial commit into it. Although I had options for how to set up access, since I would be the only one committing to it, I just went with a very basic access setup.
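That basic dav access setup lives in an Apache config block along these lines; the location, repository path, and password file here are placeholders, not my actual layout:

```
<Location /svn>
	DAV svn
	SVNPath /home/svn/repo
	AuthType Basic
	AuthName "Subversion"
	AuthUserFile /etc/apache2/dav_svn.passwd
	Require valid-user
</Location>
```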

I had to set up my self-signed SSL certificate so I could continue setting up svn. That is as simple as running the openssl command with the correct options. I did a Google search, since I was too lazy to read the manual.
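The self-signed certificate boils down to one openssl command; the subject and output file names here are placeholders, and -nodes leaves the key unencrypted so apache can read it without a passphrase:

```shell
# Generate a key and a self-signed certificate, valid for one year.
# The CN should be your real host name; these names are placeholders.
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
    -subj '/CN=svn.example.com' -keyout server.key -out server.crt
```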