Author: Chris Malpass

I’ve been on the Amazon bandwagon for a long time. I love having any excuse not to leave the house, and, well, Amazon quite literally delivers on that.

I was gifted the Amazon Echo at Christmas and lately I’ve been curious about what else it can do beyond play music, make lists, and answer my random queries. It’s great as it is, but I know there is much more to it, so I embarked on a budget-friendly quest to turn my disconnected home into a “smart home” using the Echo and other devices that integrate with it.

My first foray into the “smart home” market was with TP-Link Wi-Fi Smart Plugs. I picked one up as a daily deal on Amazon a couple of weeks ago. It arrived quickly, and I unboxed and plugged in the unit immediately.

Wi-Fi smart plugs allow you to control power to electrical outlets remotely, whether from within your home or from anywhere in the world you have an internet connection.

The first thing I had to do was download the Kasa mobile app on Android. This is the TP-Link app that helps to configure and control these devices. The setup process is pretty simple. Just fire up the app, tell it what you are setting up, give it your wi-fi credentials, and it does the rest.

You’re given the option to name the new smart plug and select an icon for it, which is nice. You can also choose whether the plug can be controlled when you’re away from home or only when you’re home and on your local Wi-Fi network.

Within 5-10 minutes of receiving the package I was in business and started playing with the remote features. I checked out the app and saw that there were timer modes and scheduling as well.

The literal and figurative light bulbs were going off as I realized all the ways this could make my life a little simpler: remotely controlling household lights, electronics, the outdoor Christmas lights (with suitable and safe placement), and the Christmas tree, plus mimicking natural household lighting conditions while I’m away from home.

The setup instructions for allowing Alexa to control your TP-Link plugs are pretty simple. Enable remote control on the devices you want Alexa to have access to, then use your Alexa app to ‘discover’ those devices on the same network. You can then ask Alexa to turn those devices on or off by name, or group them in the Alexa app and have Alexa switch whole groups of smart devices at once.

Me: “Alexa, turn off the Living Room”

Alexa: “Okay”

After hearing that, I bought four more of them.

There is room for improvement, though it’s a pretty minimal gripe: the hardware is bulky when plugged in and leaves room for only very slim plugs in the adjacent outlet. You really have to force it if you want two things plugged in at once when one of these is in your outlet. The TP-Link HS100 Smart Plugs really are entry-level devices, though; my needs are not complex, and I can live with “less than perfect.”

I was pretty skeptical about how easy and ‘smart’ this would be…but I think I might just be building out my smart(er) home sooner than expected.

[Photos: TP-LINK Wi-Fi Smart Plug (HS100) packaging front and rear, unboxing, box contents, instruction card, side and plugged-in views, and Kasa app new-device and remote-control screens]

Note: I was not compensated in any way for this review. These are my own thoughts, experiences, and opinions.

If you’re a photographer ordering items from Bay Photo through their ROES Emerge software platform, I’m sure you’ve been frustrated with it at least once. Newer cameras generate ever-larger images of 30MP+, and it seems that ROES Emerge can’t handle them properly. I design albums for my clients using ROES Emerge and have had a heap of trouble getting it to work properly, even after reaching out to Bay Photo for support.

As a photographer who also happens to be a little technical, I decided to start monitoring the log file associated with the application. In my case it was in: C:\Users\Username\.BayPhotoEmerge\roeslog.log

So I fired up the application and tried to open my saved album that continuously failed to transmit to Bay Photo for processing. The application would hang and crash every single time. I tracked through the log as it updated and found that I kept seeing:

Exception in thread "AWT-EventQueue-2" java.lang.OutOfMemoryError: Java heap space

So I got curious and figured that the application was indeed overflowing the heap memory that Java allocated to it. How do I fix that? Well, I ran across a post that seemed to offer some insight.

Now I have to find the config that is being used for my Bay Photo ROES Emerge to see if I can manually enter the amount of memory that can be allocated to the application.

To do this:

1. Right-click the Bay Photo Emerge shortcut on your desktop and choose Properties.

2. Go to the ‘Shortcut’ tab.

3. Find the entry labeled ‘Target’.

4. At the right side of that entry should be a file location in double quotes, like "C:\Users\Username\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\47\2bfd02ef-7d42d6b5" (the actual filename will vary on your system).

5. Copy that file path to your clipboard and open Notepad. In Notepad, go to the File menu, select ‘Open’, paste the path into the filename field, and click ‘Open’.

Once opened you’ll see a bunch of stuff that probably doesn’t make sense. Look for this (or something really similar):

<j2se version="1.6+" max-heap-size="1024M"/>

This is the entry that tells Java how much RAM may be allocated to the application. If the application is habitually hanging and crashing, this limit is too low. If you know for sure you have plenty of RAM to spare (like I do), you can change it to something greater than its current value.
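For example, doubling the limit might look like this (2048M is an assumed value for illustration; make sure your machine actually has that much RAM to spare):

```xml
<j2se version="1.6+" max-heap-size="2048M"/>
```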

After making that change I saved the file, exited Notepad, and restarted Bay Photo ROES Emerge.

I retried the operations that continuously failed earlier and it seemed to work as expected.

I’m not sure why it was such a hassle to get to this point, but being unafraid to dig around helped a lot. If I weren’t apt to explore, I would probably have spent another week of frustrating back and forth with support. Hopefully someone else stumbles on this post and finds it helpful.

I recently came across the need to push a git repository directly to a web server and have the repository’s changes automatically reflected in the server’s web root: simply push the repo to the web server, and the changes are applied to the web site’s code automatically.

There are several ways of doing this, and this one is not necessarily the most seamless, but it doesn’t require as much pre-configuration to get up and running as the other methods.

I found a few tutorials and put together my own here. For this purpose I’m using a CentOS 6 Linux remote host, accessed over SSH via PuTTY on a Windows desktop. The instructions assume that your remote server’s web root is empty or that you have backed it up appropriately before proceeding. I’m also assuming you have a repo somewhere else that you can push to the web server remotely.

Connect to your web server using your ssh client of choice. Once connected you should be prompted to log into your server with credentials.

Verify that git is installed on your host machine. If it isn’t, or you’re not sure, you can install it with:

yum install git

Next you will need to create a bare local git repository. I’m making mine under the current user’s home directory. You can put it wherever you like, but this seems as good a place as any to me:

mkdir repo.git && cd repo.git
git init --bare

Now we have a bare git repo. Next we navigate into it and set up the hook that will link it to our web root directory (or other web-facing subdirectory):

cd hooks
cat > post-receive

This should give you a blank line where you can type. Enter this:

#!/bin/sh
git --work-tree=/path/to/web/directory --git-dir=/path/to/repo.git checkout -f

Hit CTRL+D to save the stuff you just entered. Now git will checkout your repo to the web directory you specified when you push changes to it.

Now give the hook you just created the ability to execute:

chmod +x post-receive

Now you have a working git repo on your host that you can push to. I have added it as a remote in my local Windows msysgit GUI; you can do the same using whichever method you use to interact with git.

Now I can simply make changes locally, commit them in my local environment, and push to the remote web server, which checks the repo out to the web directory specified in the post-receive hook.
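Put together, the server-side setup above can be sketched as a single script. The paths here are examples; adjust WEB_ROOT and REPO to your own web root and repo location:

```shell
#!/bin/sh
# Example paths -- adjust to your own layout.
WEB_ROOT=$HOME/www      # the directory your web server serves
REPO=$HOME/repo.git     # where the bare repository will live

mkdir -p "$WEB_ROOT"
git init --bare "$REPO"

# The post-receive hook checks the pushed work out into the web root.
# The unquoted heredoc expands $WEB_ROOT and $REPO now, so the hook
# ends up containing literal paths, just as in the steps above.
cat > "$REPO/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree=$WEB_ROOT --git-dir=$REPO checkout -f
EOF
chmod +x "$REPO/hooks/post-receive"
```

From your workstation you would then add the server as a remote (for example, git remote add live ssh://user@server/home/user/repo.git) and push to it.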

I recently did a project that required DirectoryPress for a vendor directory. The DirectoryPress system sends out emails for various site events, but it was lacking in one area: the site required that all submitted listings be approved and published by an administrator, and that an email be sent to the post author on publication, which is something DirectoryPress did not facilitate.

In order to send the post author a message when their directory listing was published, I had to add a block of code to the theme’s functions.php.

I have written a very simple script to perform a useful function for me. I figured I’d share it so that others can make use of it.

If you have a CSV file whose first row contains headers and whose subsequent rows contain the data you wish to update, along with an ID column, then this will work for you without any issues. If your primary key is different, or you wish to match on other criteria, you may need to adjust the script a bit. It does not handle uploads; it only uses a CSV file residing in the same directory as the script.
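The script itself didn’t survive in this copy of the post, so here is a hedged sketch of the idea in Python (the original may well have been written in another language): read a CSV whose header row names the columns, then run one UPDATE per data row, matching on an `id` column. The function name and the `id` key are assumptions, not the author’s originals.

```python
# Hypothetical reconstruction of the described script, not the original.
import csv
import sqlite3


def apply_csv_updates(conn, csv_path, table, key="id"):
    """Update `table` from a CSV file whose first row is headers.

    Each data row becomes one UPDATE matching on the `key` column.
    Column names come from the CSV header, so this assumes a trusted
    local file (no injection hardening, as in the original's spirit).
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        data_cols = [c for c in reader.fieldnames if c != key]
        for row in reader:
            assignments = ", ".join(f"{c} = ?" for c in data_cols)
            params = [row[c] for c in data_cols] + [row[key]]
            conn.execute(
                f"UPDATE {table} SET {assignments} WHERE {key} = ?", params
            )
    conn.commit()
```

Swapping the connection for a MySQL one (or any DB-API driver) would keep the same shape; only the placeholder style might differ.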

I’ve got a lot of data to back up. Lots. I have tried several services and was unhappy with the pace of the backups as well as their lack of reliability.

I landed on Carbonite as my next product to try out. With the volume of data I had to back up, I gave it a try. Initial backups went at a steady pace, faster than I expected, really. But after the first 30 days or so it slowed to a crawl. It would have taken about 18 months to complete my initial backup, and by then I would have had double the data. It was never going to keep pace, and that was a huge deal-breaker.

In addition to the slow pace of the initial backup, the actual backup engine software failed to work more than a dozen times, resulting in weeks of absolutely no backups. I had to keep restarting my computer, re-installing the software, and hoping it would just start working. At one point it told me it was going to have to start the backup over again. That was the icing on the cake so I gave up on it. I couldn’t trust it with my data.

This service just wasn’t going to work. I emailed support and asked to cancel the account and refund my remaining balance. A bit later I got a response saying I needed to call them during certain hours before I could move forward. I let some time pass before I tried again.

I emailed support once more with the same request: cancel and refund. Same line: they wanted me to call during certain hours. I responded to let them know that I would not be calling and that this was a simple request. After a few more email exchanges and confirmations of confirmations, I was finally done with the service. Almost. At no point did they have any intention of refunding my year-long subscription, of which I had used eight weeks; they only pointed me to their policy on refunds: they don’t give them.

I cannot recommend Carbonite Online Backup for any real-world use. It would have been one thing if the service simply couldn’t perform, but it’s another thing entirely to refuse a refund when someone is completely unsatisfied with the service. That’s just bad business.

I regularly process presentation recordings and encode them into streaming formats, and I have repeatedly run into the near impossibility of converting the GoToMeeting WMV format. GoToMeeting seems to record presentations in one of two variations: its own proprietary WMV format, or standard Windows WMV.

The latter is what is needed to easily convert the video file into various other formats, FLV included.

Take these steps to convert a given file from the GoToMeeting codec to a standard WMV.

Now you want to locate the g2mtranscoder.exe and g2m.dll files — these are what actually do the conversion for you.

Once you’ve located them, copy them somewhere you can access easily, and take note of the full path of the files (e.g., C:\Program Files\Citrix\GoToMeeting\g2mtranscoder.exe).

Locate the video file you want to convert and take note of that file’s path (e.g., C:\My Documents\Videos\videofile.wmv).

If you have Windows XP go to your start menu and then open the ‘run’ menu and type in “CMD” (without the quotes). If you have Windows Vista or 7 then go to your start menu and type “CMD” (without the quotes) into the search/run area at the bottom of the menu.

Now you’ll have a command window where you call up g2mtranscoder using the path you noted earlier plus the path of the video file. Your command should look something like this (quote any path that contains spaces):

"C:\Program Files\Citrix\GoToMeeting\g2mtranscoder.exe" source="C:\My Documents\Videos\videofile.wmv"

This will run the g2mtranscoder.exe and use the video file as the source. Of course the names and locations of each of these may change depending on your system. It will overwrite the old G2M WMV with the new plain Windows WMV.

You can now freely convert this WMV to other formats without the difficulty of the G2M codec.

You can check whether the conversion is complete by using Windows Task Manager to see if the process is still running: in the ‘Processes’ tab, look for the name of the exe in the list. If it shows up, it is still running. Longer video files take longer to convert.

How to bring up Task Manager:
Press CTRL+ALT+DELETE and click ‘Task Manager’.
OR
Right-click an empty part of the taskbar and click ‘Task Manager’.

I’m working with a large dataset. One of the columns is an INT(8), but not all of its values are 8 digits long. I need to run a query that sums values and groups by the first two digits of that column. This presents a problem: the grouping may come out wrong unless every value is treated as an 8-digit number. How do you pad these smaller numbers out to 8 digits?

Simple. There are two ways.

Alter the table itself using ZEROFILL in MySQL. This will pad your values with 0s on the left, up to the display width defined for that column:
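The SQL snippets didn’t survive in this copy of the post, so here is a hedged sketch with hypothetical table and column names (readings, code, value). The first statement is the ZEROFILL approach described above; the second is presumably the other way, padding at query time with LPAD:

```sql
-- Way 1 (hypothetical names): make the column itself zero-pad on display.
ALTER TABLE readings MODIFY code INT(8) UNSIGNED ZEROFILL;

-- Way 2 (assumed, since the original snippet isn't shown): pad at query
-- time with LPAD, then group on the first two digits of the padded value.
SELECT LEFT(LPAD(code, 8, '0'), 2) AS prefix,
       SUM(value) AS total
FROM readings
GROUP BY prefix;
```

Note that ZEROFILL only affects how values are displayed, so the query-time LPAD is the safer bet for grouping logic.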

I was following the instructions here to install memcache on a CentOS 6 server that I’m currently configuring. I was able to install the base memcache rpm, but had trouble when installing the PECL extension for PHP.

The first error I got had something to do with not being able to phpize the script.

So, I ran the following command to install the php development package:

yum install php-devel

Then I re-ran the PECL memcache install and came across this message:

checking for the location of zlib... configure: error: memcache support requires ZLIB. Use --with-zlib-dir=<DIR>

I finally found a solution, which was to install the zlib-devel package with this command:

yum install zlib-devel

Now you should be able to successfully install and add the memcache extension to your php.ini. Follow the instructions linked above for more information.