I recently got myself a smart watch, a WeLoop Tommy from China, which seems like a really good Pebble clone at less than half the price. Before going into the features, let’s get some photos first.

So I’ve been wearing the watch for 3 entire days at this point. I started off fully charged (I charged it to 100% and left it on the charger for another hour), and the battery is currently at 72%. I’ve had a decent amount of notifications coming in, which could be part of the reason the battery is draining so fast, but the other possible reason is that the chip is still calibrating the battery. And given that at this point it is expected to take at least another 7 days to drain the battery completely, I doubt I can get an accurate picture of the battery life anytime soon. Which I believe is a good thing, given that most smart watches these days have batteries that last less than 1 week.
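Out of curiosity, the numbers above can be extrapolated with some quick arithmetic. Treat this as a rough sketch only: it assumes the drain rate stays linear, which the calibration I mentioned may well throw off.

```python
# Rough battery-life extrapolation from 3 days of use.
# Assumes a linear drain rate, which calibration may skew.
start_pct, current_pct, days_worn = 100, 72, 3

drain_per_day = (start_pct - current_pct) / days_worn  # about 9.3% per day
days_remaining = current_pct / drain_per_day           # about 7.7 more days
total_life = start_pct / drain_per_day                 # about 10.7 days total

print(round(days_remaining, 1), round(total_life, 1))  # 7.7 10.7
```

So if the drain holds steady, that lines up with roughly another week before it dies, and 10 days or so of total battery life.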

So as of now, the Android app has 3 main features: the camera feature, the notification forwarding feature, and the firmware update feature. The WeLoop team says that they will have more features, such as watchfaces, in around 1 month’s time, as per the tweet below.

@laurenceputra Thanks for your question. New watchfaces will be upload in next official version. 1 month at the soonest.

While the current Android app doesn’t have a wealth of features, it is enough to do most of the stuff I want it to, although I would definitely like to have the fitness feature and additional watchfaces really soon.

As for the watch itself, I find that while the software is relatively stable, there are times when it crashes and wipes all the data inside, which is basically all your fitness data. In my 3 days wearing it, it has crashed 2 times. The silver lining, though, is that a crash doesn’t brick the watch, and you don’t have to wait for the battery to run out (that would be a nightmare) before you can reset it.

As of now, the watch allows you to take photos with your phone, and it does this really neat thing where when you switch to the camera interface in the WeLoop app on the phone, you get the camera controls on the watch. The watch also has a fitness tracker that tracks the number of steps you have taken, as well as distance. I did some (non-scientific) experiments with this, and the data comes within 10m of what Google Maps gives me for a 380m distance. On top of that, the watch is also able to control your music player, so you can play/pause music and skip to the next or previous track.

While the current set of features might not seem like a lot, it’s actually pretty decent, and enough for most daily uses. I’m looking forward to when they update the app and firmware, and will post more updates then.

Meetup 2 was held on 19th March at Plugin@Blk71, and saw James Tan from MongoDB, and Khang Toh from Picocandy coming down to speak about MongoDB in production.

This being the first meetup at Plugin, a hotbed for startups in Singapore, there were a fair number of people who had never used Mongo before, and a couple who had only used Mongo in toy apps. As a result, James spent more time covering the basics of MongoDB, and what a production system should look like.

After James’ talk, we went outside for pizza, kindly sponsored by MongoDB, and started mingling with one another. After the pizza, we went back in for Khang’s talk.

Khang gave a short talk about how he built a scalable, resumable file upload server using Tus.io and MongoDB, which in my opinion was a rather interesting way to handle this problem at scale. His slides can be found below.

So I posted about this before, except that was about using your own router instead of the crappy gateway that Starhub provides. My router was getting old, and despite still being better than Starhub’s gateway, it was rather slow. So I got an ASUS AC66. Turns out, you can make your ASUS router spoof itself as Starhub’s gateway, and instead of having 3 devices (the ONT, the Starhub gateway, and the ASUS router), you can reduce it to 2.

So you had a really awesome router that had no issues delivering a working signal into your room, or wherever your computer is. It had been working for a long time.

And one day, Starhub turned up saying that fibre is better, and that they have a really awesome home gateway that they will give you for free, and you believed that crap. And you upgraded. And now your home gateway disconnects every couple of minutes, you can’t even achieve the speeds you used to, and the really awesome optical fibre box refuses to let you connect your old router (it’s the VLAN setup they have).

I had a chat with Rahul to try to figure out how to get around this problem, because it’s really a pain in the ass, and decided to blog about it, simply because of the lack of documentation online.

So if you have an old router that was previously working perfectly, check to see if it’s using 192.168.0.1 as its IP. If it is, change the home gateway’s IP to 192.168.1.1, with 255.255.0.0 as the subnet mask, so that the home gateway and your home router don’t conflict.

Next, connect to your home gateway via an Ethernet cable, and deactivate its wireless (to prevent interference with your router’s wireless).

Then, connect your router to the home gateway.

Next up, under Status->Lan Clients, check the IP address of your router.

Go to Advanced->DMZ, enable it, select the internet connection for the WAN, and put in the router’s IP Address as the host.

And you can now connect to your old router and have a functioning wireless network in your house that doesn’t disconnect regularly.
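If you want to double-check the addressing before touching the web UI, a quick sketch with Python’s ipaddress module shows why moving the gateway to 192.168.1.1 with a 255.255.0.0 mask avoids the conflict: the two devices end up with distinct addresses but still share the 192.168.0.0/16 range, so they can reach each other.

```python
import ipaddress

# The gateway moves to 192.168.1.1 with a 255.255.0.0 (/16) mask,
# while the old router keeps 192.168.0.1. Distinct addresses, but
# both sit inside 192.168.0.0/16, so they can still talk.
net = ipaddress.ip_network("192.168.0.0/16")
gateway = ipaddress.ip_address("192.168.1.1")
router = ipaddress.ip_address("192.168.0.1")

print(gateway != router, gateway in net, router in net)  # True True True
```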

4 (or 3) years ago, I started thinking about hosting my own blog, and using that as motivation to pick up web programming. I bought the cheapest plan over at SingaporeHost.sg: 5GB of disk space for SGD$8 a month. In fact, the earliest version of this blog was hosted there. After looking around, I decided I needed to learn more than just PHP, and since AWS had their 1 year free trial, I jumped over.

Little did I know that their micro-instances had CPU stolen from them all the time. On top of that, the filesystem IO was starting to cost money, and there were days where I racked up around 50 cents’ worth of EBS IO. That was when I realised AWS wasn’t that good a deal.

Then I jumped to Webfaction, and it was really good. But I missed being root, so I got another machine at AlienVPS. I was happy with their price points ($15/year, $4/month), and I got one of each VPS. They were using OpenVZ, and while that meant some things required workarounds, you could get around them. Really slow customer support, but I reckoned that at their price points, I shouldn’t expect too much.

Then came DigitalOcean: $5/month for a KVM VPS with SSD storage. It was the dream VPS service. Oh, and it’s on Tier 1 bandwidth as well. And despite their cheap prices, their customer service has an RTT of 10 minutes. I’ve tried it on various occasions throughout the day, and it has always been 10 minutes. They even teach you how to set up a VPN on their servers. What more can one ask for?

As of now, I have set up OpenVPN plus a couple of sites over on DigitalOcean, and I’m really glad I did so. Their new datacenter location in San Francisco has an RTT of 200ms to Singapore; I couldn’t be happier.

However, despite AlienVPS looking better on the network end, they use OpenVZ to virtualise. The result is that you are unable to create your own swap partition, and some OS modules are not available (not exactly a deal breaker, as there are ways to get around it).

DigitalOcean uses KVM (QEMU), and you can pretty much do most stuff with it, BUT you get only 1 core, and high RTT from Singapore (that is VERY bad if you are using it for regular web surfing).

Personally, I’ve tried both, and I would say that AlienVPS is way better in terms of performance. However, on the cheaper plan you may run into memory issues, because the amount of memory is really lacking. I have now set up the $4/month AlienVPS server with OpenVPN, and streaming videos from Hulu is pretty much smooth. Hulu on DigitalOcean (New York) is decent, but may lag at times. That said, DigitalOcean uses SSDs, and if that’s what you are looking for (a DB server and the like), it could fit the use case of a cheap DB server too.

Got a 128GB mSATA SSD for my Thinkpad, as my cache read percentage was consistently under 10%. I’m writing this post because I could not find any documentation online about replacing the mSATA SSD.

So, here are a couple of things you need to know about replacing the mSATA SSD on your Thinkpad Twist:

The cache partition has a limit of 32GB.

You still need to set aside space (the size of your RAM plus a bit more; I gave it an extra 256MB) for Intel Rapid Start Technology (RST), the thing that makes Windows start up and recover from hibernation faster.

You can use the remaining space as a regular SSD volume.
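For my setup (a 128GB drive and 8GB of RAM), the split works out roughly like this. Treat the numbers as a sketch, since drive makers’ GB and Windows’ GiB don’t line up exactly:

```python
# Rough partition plan for the mSATA SSD, sizes in GB.
drive_gb = 128
ram_gb = 8

rst_gb = ram_gb + 0.25                      # RST partition: RAM + 256MB headroom
cache_gb = min(32, drive_gb - rst_gb)       # ExpressCache caps out at 32GB
leftover_gb = drive_gb - rst_gb - cache_gb  # usable as a plain SSD volume

print(rst_gb, cache_gb, leftover_gb)  # 8.25 32 87.75
```

So even after the cache and RST take their cut, there is a healthy chunk left over for a regular SSD volume.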

Steps to replace the mSATA SSD

Buy an mSATA drive of your choice. The recommended size to make full use of the cache and Intel RST is 64GB; more if you want SSD space as well. I went with the ADATA XPG SX300.

Uninstall the ExpressCache software from your computer.

Unscrew the 2 screws at the base of the laptop that have the keyboard logo beside them.

The mSATA slot is in the top left corner of the motherboard. Unscrew the screw that holds the mSATA device down, and it will pop up.

Extract the card from the slot, and put your new mSATA SSD into it.

Put the hardware back together.

Now the fun part. The installation of the drivers and making things work.

Boot up your machine and install ExpressCache, available at http://download.lenovo.com/express/HT074404.html. The Win7 and Win8 versions are the same. The installation will create a partition for the cache using whatever space remains on the mSATA SSD, up to 32GB. Reboot.

And there you have it: your new mSATA SSD is now operational. If there’s extra space, you can create a simple volume using the Disk Management software in Windows (Win-X, then select Disk Management).

And finally, I’ve decided to release my code for a queue I implemented in PHP and MongoDB. For those who want to look at the code, it’s here.

So first, why use MongoDB as a queue?

I already had a distributed cluster of MongoDB nodes.

A single backend datastore reduces the complexity of the app. Since I was already using MongoDB as my backend datastore, and Mongo’s GridFS as my filesystem, it naturally made sense to continue using MongoDB for other parts of the system as well.

I already had knowledge of mongodb

MongoDB has a pretty neat feature that actually makes sense to run a queue on (more on that in a bit).

Components of the Queue

A PHP daemon. A daemon is basically a long-running process that does stuff; in this case, it calls on other processes to do the job. One of the more important aspects of the daemon is memory management, to prevent it from spiraling out of control.

PHP scripts to do your job. The PHP daemon will essentially call on these scripts to do the job.

A MongoDB capped collection. This is, in my opinion, by far one of the most amazing aspects of MongoDB. It has a kind of collection (a table, in RDBMS terms) called a capped collection that you can actually tail. Yes, like the tail -f command in Unix. This means you can literally hold a persistent connection to the database, and it will just hand you the new documents as they are added to the collection. Note that capped collections have a cap on their size, hence the name, and you cannot delete entries; when the collection is full, the earliest entry gets overwritten.

How it works

Basically, there are two loops keeping the queue process running: the outer loop to recover from the DB going down, and the inner loop to actually run the queue. It uses the tail function on the capped collection to check for new additions, and then processes each new job that comes in by running the exec function, which basically forks off a new process. This was done to limit memory usage, since it’s a long-running PHP process; tests have put its memory consumption at 13MB of RAM.
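The real daemon is PHP, but the two-loop shape is easy to show in a short Python sketch. Here a plain generator stands in for the tailable cursor, and the dispatch callback stands in for the exec call that forks off a worker; none of these names come from the actual codebase.

```python
def tail_jobs(collection):
    # Stand-in for tailing a capped collection: the real cursor
    # blocks and yields new documents as they are inserted; this
    # stub just yields from a list.
    yield from collection

def run_queue(get_cursor, dispatch):
    done = []
    while True:                       # outer loop: survive the DB going down
        try:
            for job in get_cursor():  # inner loop: tail for new jobs
                dispatch(job)         # the real daemon exec()s a PHP script here
                done.append(job)
            break  # the stub cursor is finite; a real tailable cursor never ends
        except ConnectionError:
            continue                  # reconnect and resume tailing
    return done

jobs = [{"id": 1, "task": "resize"}, {"id": 2, "task": "email"}]
handled = run_queue(lambda: tail_jobs(jobs), dispatch=lambda job: None)
print([j["id"] for j in handled])  # [1, 2]
```

Forking each job off to a separate process is what keeps the daemon itself small: the workers’ memory is reclaimed by the OS when they exit, so only the loop’s own footprint has to stay under control.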

If you’re using CodeIgniter and are considering a queue implemented on MongoDB, check it out here!

Microsoft has really changed everything
Really. Putting touch into the OS was just brilliant. It makes things easier to do, and generally improves how happy I am using the OS.

This is a damn good machine
With 8GB of RAM and an i7 processor that goes up to 3GHz, it is pretty powerful. So much so that I can run 2 Linux VMs (1 CentOS and 1 Ubuntu; no choice, it’s for a module assignment) alongside the host OS, and switch between them without facing much lag.

The keyboard is awesome :D
This new layout by Lenovo really makes typing a lot easier.

Users might face problems with the laptop, though. For example, when you install firewalls such as Comodo, Windows 8 exhibits weird behaviour (not sure if it’s just on the Twist): Charms will lag really badly, and it can only be fixed by logging out and logging back in. This behaviour happens only after you restart the machine.

The other gripe I had with the laptop was Intel Power Saving Technology. When the brightness is dimmed to minimum levels, this ‘technology’ kicks in and adjusts your brightness based on the colours of the page, which will at times render your machine unusable. After poking around for a while, I finally narrowed it down to this ‘technology’, and not Windows’ adaptive brightness.

All in all, I have to say this is the best laptop I’ve owned so far, better even than my previous MacBook Pro, especially in the OS’s memory handling. Microsoft has really made a breakthrough this time round.

So I’m bored of using SSH tunneling. Plus, when I open too many connections, it sometimes dies. So I decided this morning to set up a VPN service that I can use to connect securely to the net, one that is actually designed for this (the fact that it makes the forwarding global helped too).

My server is technically an OpenVZ server hosted by Alienlayer, with 512MB of RAM and two 2GHz cores over in Las Vegas. I expected the installation to be a breeze: a couple of yum install commands and I’d be done. Turned out I was wrong.

The initial setup was easy, following the instructions over here for CentOS and here for Windows. The part that stumped me was when I couldn’t update the iptables rules. I searched the net for hours to no avail. Finally, with the help of Olipro on StackExchange, I realised the problem was that the MASQUERADE module didn’t exist on the server, and as far as I know, it isn’t virtualised yet. So I couldn’t use it.
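For what it’s worth, the workaround usually suggested when MASQUERADE isn’t available on OpenVZ is a static SNAT rule, which only needs the nat table. This is a sketch, not a recipe: the subnet (10.8.0.0/24), the interface (venet0) and the address (203.0.113.10) are placeholders you’d swap for your own VPN subnet, your container’s interface, and the server’s public IP.

```shell
# MASQUERADE resolves the outgoing address dynamically via a module
# OpenVZ may not expose; SNAT rewrites it to a fixed address instead.
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source 203.0.113.10
```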

For people who are intending to set up a VPN server in future, here’s a link that you might find useful.