As part of my current contract position, I have created a plugin for Moodle that others may find useful. The plugin takes the form of a block that opens a popup window when the user logs in, with the content determined by a condition set by the Admin. I’m still fleshing out which conditions will be available to the Admin. The idea is to allow Admins to set multiple conditions in order to deliver different content to specific groups of users. The current functionality displays different content based on the last time the user logged in. The popup shares the same theme as the rest of the site but only includes the top toolbar rather than the standard layout. No other blocks are displayed and neither is the breadcrumb navigation. This gives the Admin the whole page to work with when adding content and also allows the popup to remain small. These popups are not meant to be fully fledged Moodle pages. Users are meant to read the content and then exit the page rather than navigate Moodle within it.
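The last-login condition itself boils down to a simple comparison. Here is a rough sketch of that selection logic — written in C++ for brevity, though the actual plugin is PHP running inside Moodle, and the thresholds and content keys are made up for illustration:

```cpp
#include <ctime>
#include <string>

// Pick a content key based on how long it has been since the user last
// logged in. Thresholds and key names are hypothetical, not the plugin's.
std::string popupContentFor(std::time_t lastLogin, std::time_t now) {
    const double days = std::difftime(now, lastLogin) / 86400.0;
    if (days >= 30.0) return "welcome-back";   // long absence
    if (days >= 7.0)  return "weekly-recap";   // away for a week or more
    return "whats-new";                        // recent visitor
}
```

Multiple Admin-set conditions would simply extend this chain, with the first matching condition deciding which content the popup shows.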

The reason I chose to make this plugin a block is that I see the block being used to give a quick summary or notification of the popup’s content. The plugin is still in development, so a better use of the block’s content space may become clear as I work on it, or I may change the plugin type and drop the block functionality altogether. The code for this plugin is available on my GitHub account. Check back soon for more updates about this plugin and be sure to check out my other Moodle plugins!

The light sensor (plus a few other goodies) arrived from Adafruit a couple of weeks ago. Since then I’ve had a chance to breadboard the final circuit that will make up the smart lamp. I still have to order a case for the final product and decide where to mount the light sensor for optimal exposure. The plan has changed from creating a smart lamp to creating a smart extension cord, as I found out that there are multiple sets of lights that will run off this device. I will mount the finished circuit to the end of an extension cord instead, which will allow whatever is plugged into it to be activated when light levels drop below a set threshold.

I’ve posted the code that runs on the Adafruit 5V Trinket on my GitHub page for those interested. Working with the Trinket itself has been mostly pain free. Adafruit has a great tutorial section for setting up and working with the Trinket. However, I did have trouble getting the light sensor to work with the Trinket when I first started out. Since the Trinket is not able to send serial communications back to a connected PC, I had to switch to an Uno R3 to see what was going on via the serial monitor. The light sensor I chose from Adafruit is perhaps not the best suited for this project because it requires a 3.3V reference in order to guarantee the best accuracy when taking readings, even though the sensor itself can be supplied with up to 7V. In the end I was able to get it working with the 5V Trinket, and accuracy does not seem to be an issue based on the subjective testing I’ve done. Keep an eye out for a post with the final circuit all soldered together!
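For anyone wiring up something similar, the decision logic on the Trinket comes down to comparing the analog reading against a threshold. Here is a minimal sketch of that logic as a plain C++ function so it can be checked off the board — the ADC range and both threshold values are assumptions, not the numbers from my actual sketch:

```cpp
// Decide whether the lamp should be on, with a hysteresis band so the relay
// does not chatter when light levels hover near the threshold. Readings are
// assumed to be 10-bit ADC values (0 = dark, 1023 = bright).
bool lampShouldBeOn(int reading, bool currentlyOn,
                    int turnOnBelow = 300, int turnOffAbove = 400) {
    if (currentlyOn)
        return reading <= turnOffAbove;  // stay on until it is clearly bright
    return reading < turnOnBelow;        // turn on only when it is clearly dark
}
```

On the board, loop() would read the sensor pin with analogRead(), feed the value through a function like this, and drive the relay pin accordingly.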

My Dad has been doing some remodeling of his living room recently and, as part of this, bought bookshelves and some lights to go with them. He wanted some way to turn them on without having to deal with the little switch included with the lights, so he thought it would be nice if the lights turned on in the evening on their own. So the task fell to me and I happily accepted.

This project is actually fairly simple, as only a few cheap parts are needed and little coding is required. There is a real danger of electrocution with this project, however, as we are dealing with mains voltages of 120V to 220V (depending on where you live). Please be sure to take the necessary precautions when working with live wires for this and any project you take on.

Since I didn’t have access to the lights when I started, nor did I have a light sensor handy, I began by creating a prototype to test the relay with an Arduino Uno R3. I wrote a simple loop that closed the relay, waited a second, then opened it again. This repeated for as long as power was supplied to the Arduino.
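The loop itself is only a few lines. Here is a sketch of the logic with the Arduino calls stubbed out so it can run (and be checked) off the board — on the Uno, digitalWrite() and delay() come from the Arduino core, and the pin number is just a placeholder:

```cpp
#include <vector>

// Stand-ins for the Arduino API so the loop body can run on a PC.
const int RELAY_PIN = 7;                     // placeholder pin
const int HIGH_ = 1, LOW_ = 0;
std::vector<int> pinLog;                     // records each digitalWrite level
void digitalWrite(int /*pin*/, int level) { pinLog.push_back(level); }
void delay(int /*ms*/) {}                    // the real sketch waits here

// One pass of loop(): close the relay, wait a second, open it again.
void loopOnce() {
    digitalWrite(RELAY_PIN, HIGH_);          // energize the relay coil
    delay(1000);
    digitalWrite(RELAY_PIN, LOW_);           // release it
    delay(1000);
}
```

On the Arduino, this body just goes in loop(), which the core calls repeatedly for as long as the board has power.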

Relay connected to extension cord.

For the next prototype I simply added a sensor I had lying around to test controlling the relay based on the input from a sensor. The sensor I chose was an IR motion sensor. The end result was an extension cord that would send power to whatever was plugged into it when motion was detected. I’ll post an update once the light sensor arrives and I find time to wire up a circuit for testing.

Right Tool for the Job

Over the past few months, I have been endeavoring to clean up the construction of, and add features to, my object-avoiding robot Geoff. In doing so, I’ve become increasingly frustrated with the hardware I currently have. This frustration was only magnified after using the Lego NXT line of robot controllers in a machine learning class I took. I really appreciated the flexibility Lego gave me to quickly prototype a new hardware configuration. While I love Lego, I’m not as in love with their controllers, and I wasn’t about to abandon my Arduino or BeagleBone. Instead I chose the Makeblock hardware system, which the creators characterize as “Lego for adults”. If anyone reading this remembers Meccano, it’s very similar.

First Impressions

I bought the Ultimate Kit from robotshop.ca in blue. All the beams are made from extruded aluminium. They are very lightweight but, after putting together the robot arm car, clearly sturdy. The only set of instructions that comes with the kit is for the robot arm car featured on the box and website. Makeblock advertises that ten robots of Makeblock’s own design can be built with the kit. After some searching, I found no formal instructions for the other nine robots, only CAD diagrams that can be exploded to show their construction in detail but not step by step.

Due to some errors, my order with robotshop.ca was upgraded to the Ultimate Kit with electronics. I had initially ordered it without electronics since I have an array of sensors plus an Arduino. Since I had the electronics, I went ahead and put together the robot arm car. The construction of the car was very straightforward and only a few areas gave me trouble, mostly where quite small hardware had to be fitted into a hard-to-reach area. Adjusting the pulley for the arm so that the belt did not skip was also a somewhat involved process. In spite of these issues, construction with the Makeblock hardware was a joy compared to using the ragtag collection of hardware I had been using up until now. All the bolts fit nicely where they were supposed to and the beams all lined up as illustrated in the instructions.

Once the robot car was assembled and the electronics installed, the real problems began. The robot arm car is not autonomous; rather, it is meant to be controlled over Bluetooth from a smartphone using an app available from Makeblock. I was using a Motorola X 2015 with this app and was never successful in getting it to work. The app always had trouble connecting to the Bluetooth module on the robot. When it did connect, none of the pre-configured control options worked, nor did trying to make a custom one. I found little documentation online about the app or troubleshooting connection issues. Other customers reported similar issues on the official forums, but there was little response from Makeblock. Thankfully, another customer recommended an Android app called Robot Bluetooth Control from the Google Play store.

Using Robot Bluetooth Control, I was able to connect to my robot, but the sketch loaded on the Arduino clone it came with did not recognize the commands the app was sending. Since I could not find any copy of the sketch Makeblock uses to control the robot arm car, I was forced to write my own. After some trial and error, I was able to write a program that recognizes the commands sent by the Android app and performs various functions, such as moving the car or the arm, when they are received. I have pushed the sketch I wrote to GitHub for others to use as they wish.
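The core of a sketch like this is a small decoder that maps each byte received over the Bluetooth serial link to an action. The single-character commands below are hypothetical stand-ins — the app lets you configure what each button sends, so your mapping will differ:

```cpp
// Map one received command byte to a robot action. The characters here are
// placeholders, not the app's actual protocol.
enum class Action { Forward, Backward, Left, Right, ArmUp, ArmDown, Stop, Unknown };

Action decode(char command) {
    switch (command) {
        case 'F': return Action::Forward;
        case 'B': return Action::Backward;
        case 'L': return Action::Left;
        case 'R': return Action::Right;
        case 'U': return Action::ArmUp;
        case 'D': return Action::ArmDown;
        case 'S': return Action::Stop;
        default:  return Action::Unknown;    // ignore anything unexpected
    }
}
```

On the robot, loop() would read a byte from the Bluetooth serial port, decode it, and drive the motors or the arm based on the resulting action.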

Cautiously Optimistic

Overall, I’m very pleased with the Makeblock Ultimate Kit. The hardware seems to be just what I need to quickly prototype my projects moving forward. This is the kind of kit that anyone serious about creating projects that interact with the world around them, but lacking a full-blown workshop, needs to pick up. I live in a high-rise, so I have neither the space nor access to a machine shop or even a garage, and making my own parts is out of the question. Kits like these make building complicated mechanical projects possible for those who otherwise cannot build or access the parts they need. The Ultimate Kit is a bit pricey, but Makeblock sells other kits that start well below $100. I would caution anyone hoping to get this for a child or anyone absolutely new to Arduino programming, as the Makeblock ecosystem of software for their products still needs some work. The documentation for each sensor is good enough that, with some time spent creating and testing the sketch, the robots they advertise will work. I would not expect them to work out of the box, however, which is disappointing, especially for those who sprang for the Ultimate Kit.

Stepping Up

This summer sees me working on a Web Administration team for a medium-sized outfit. My days have consisted of installing Java enterprise applications on servers in a wide array of pre-production environments. While this part of the job has been my least favourite, the environment I work in has inspired me to try something new. Using the ample resources of my desktop computer, I’ve set out to create a small network of nodes. This small network will be a playground for me to expose myself to web application development and network maintenance. Since starting out in Computer Science, I’ve known I would have to face web programming at some point, as web applications are a booming industry.

Lost with a Map

Not quite sure what I wanted to do, I set out on Google to find some guidance. After only a few searches, I came across this blog by Steven Gordon. It outlines the steps he took to create a virtual network made up of three nodes in VirtualBox. Using his steps as a guide, I was able to create a similar network of three nodes. Not all the steps applied to me, and not all of them were necessary or up to date, but there weren’t any impassable roadblocks preventing me from doing most of the setup in one Saturday evening.

Devil in the Details

Each node is a clone of a base install of Ubuntu Server X.X.X. This install was configured with 512MB of memory and 2GB of hard drive space. I’m running these nodes on a Windows 7 Professional host with 8GB of memory and an i5-3420 processor. The C drive is on a dedicated Samsung Evo 840 256GB SSD. There is also a XXXGB HDD for data storage. The host PC also has an Nvidia GeForce 660 Ti video card.

With this setup, I’ve had very few problems with memory management. I can run Borderlands 2 on high settings at 1600×1050 with no issues. However, doing so with ~10 tabs of Chrome open will fill memory close to capacity, though without throwing any low-memory warnings. I’ll have to pick up more memory if I want to grow this network at all and keep it running during the odd gaming session.

A couple of days after creating these nodes and leaving them running day and night, I could not get the host PC to output video. I have the monitor set to go into power-saving mode after a few minutes, but the PC should never go to sleep or hibernate. Inspecting the Event Viewer didn’t shed any light on what the issue was, though there were a few memory warnings thanks to Chrome. I plan on doing some video card testing to make sure there aren’t any issues there, as well as testing the memory, though the current modules are fresh from an RMA.

After some verification of the current setup, my next step will be to decide what exactly I want to do with each node. I have a PHP site I’d like to host and continue to develop on one of them. I’d also like to look into various network elements like routers and the client-server model, as well as network security.

Reassessment

After letting this sit for a couple of weeks, I decided to scale back my initial plan. I spent an evening creating an Ubuntu LAMP server and set up my issue tracker web application on it. I’m going to focus on developing that application further, which may include creating a server to host databases for this and other web apps down the road.

My current Ubuntu LAMP server is set up with an 8GB VMDK instead of the 2GB I was using for the nodes above, as I was having trouble staying within that 2GB limit. I also set up the server to run with a bridged adapter rather than the NAT and internal-network setup of the nodes. The bridged adapter essentially sets up the VM as a separate machine on the LAN, as if it were a physical machine connected to the router. This has the advantage of allowing me to easily SSH into the VM from any PC on the LAN. Using a NAT setup I could have forwarded a port for SSH from other machines on the network, but now I don’t have to set up port forwarding for every service that may want to connect to that server in the future.

While the end result may not be what I initially intended, this has been a valuable experience. I’ve gained some hands-on experience configuring VirtualBox VMs and Ubuntu. With this solid Ubuntu server running, I can easily clone it and create nodes for whatever purposes may arise in the future. For now, I’ll focus on what I enjoy the most: coding. I hope to throw together a post about the issue tracker web app I am developing soon, and an update on Geoff once some parts I’ve ordered come in.

Where Does the Time Go?

With the end of March approaching, many of us are looking forward to spring, but I’m going to take a look back at the work done to get to this point. Working on our plugin for Ushahidi has differed substantially from other software projects. Starting out, our group spent time familiarizing ourselves with the Ushahidi project. Unlike other university projects, though, we started with the code as well as the documentation, rather than just the documentation. This was the code we’d have to work with and eventually edit for our own means, which has made for a very novel experience for me. Reading others’ code has been a core element of developing our plugin, as working in a group means having to understand and use the code that others in the group have written.

Git it Together

One challenge our group has faced is working with the same code. Each of us has our own copy of the plugin forked to our personal public repos on GitHub but, as of this writing, the code on our private group GitHub repo is not the current code. The problem has not been infrequent commits but rather not pushing those commits to the group repo. This has led to a few challenges in trying to make progress.

One issue has been knowing what needs to be done. I have found it hard to know what to work on without having the most current version of the code, as I do not want to duplicate work others have already done or work on the same features as others. Working on the same features without knowing the specifics of what others are working on can lead to some pretty ugly merge issues and wasted time.

My own workflow includes the following git branches in my repo:

develop: tracks our group’s repo on GitHub and holds the current code found there.

master: linked to the master branch of our group’s GitHub repo.

myDevelop: the working branch for my fork of our group’s repo. It holds all the changes I have not yet merged into the group’s GitHub repo; I’ve pushed these changes to my GitHub account.

From the pair programming I’ve done, I fear that merging the group’s code together at this point will be no small task.

A New Way of Developing Code

A lot of the work I’ve done has been with code that a team member wrote. He flew out of the gate and wrote much of the code that integrates the flot.js charts into a plugin for Ushahidi. I have since used this code to add more charts and streamline the process of adding them. Much of the initial code has been replaced either partially or completely. It wasn’t technically wrong, but it wasn’t flexible: adding a new chart involved a lot of duplicate code if that chart type was already in use, and SQL queries had to be handcrafted for each new chart, which ate up a lot of my time as I had never dealt with MySQL before this project. The solution to the code redundancy was straightforward: functions were created to handle multiples of the same chart type. Streamlining the SQL queries that build the JSON objects needed by the charts has been less so. One member has been working on using switch-case statements to allow queries to be built from a string rather than handcrafting the query for every chart. MySQL has not taken kindly to this attempt, as syntax errors are a common obstacle.
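The idea is to pick the query from the chart-type string instead of handcrafting one per chart. A rough sketch of the approach — in C++ rather than the plugin’s PHP, with a comparison chain standing in for switch-case (which can’t dispatch on strings here), and with made-up table and column names rather than the actual Ushahidi schema:

```cpp
#include <string>

// Return the SQL for a given chart type, or an empty string if the type is
// unknown. Table and column names are hypothetical placeholders.
std::string queryForChart(const std::string& chartType) {
    if (chartType == "per_category")
        return "SELECT category_title AS label, COUNT(*) AS value "
               "FROM incident_category GROUP BY category_title";
    if (chartType == "per_country")
        return "SELECT country AS label, COUNT(*) AS value "
               "FROM incident GROUP BY country";
    return "";  // caller decides how to handle an unsupported chart type
}
```

Keeping every query in one place like this also makes the syntax errors easier to chase down, since each chart type maps to exactly one complete statement.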

While working with a large code base such as Ushahidi can be daunting, there are advantages. I find it can take me a while to get into developing a new program, especially as programs get increasingly complex, because the starting point is not always clear. Or the starting point may be clear, but the specification may be open ended, causing an abrupt halt once the trivial components have all been fleshed out. This halt is partly due to the difficulty of visualizing code. An established code base can be very helpful here, as it may be rife with code similar to what you need to implement the feature you’re after. This was the case with our plugin, since the Kohana PHP framework underlying Ushahidi is fundamentally modular. Ushahidi is a collection of modules, some of which are not optional. To build our plugin, we simply mimicked how the other modules of Ushahidi were implemented.

Dear Diary

During the last month, I kept track of our group meetings and pair programming in the following file:

09/03/2014-----------------------------
-group meeting from 1330 to 1530
-trio programming from 1400 to 1500
---------------------------------------
10/03/2014-----------------------------
-Spent about 2.5 hours on the following:
-> create view for a new chart
---add the required HTML
---add required javascript to hide and show the chart as well as plot it
-> add controller code that retrieves the JSON for the new chart
-> add the model function that queries the database
---query is not properly formatted at this point
---------------------------------------
15/03/2014-----------------------------
-spent ~2 hours:
-> working in analytics/model/analytics.php to build a new SQL query for
a new pie chart that will display the number of incidents per country.
-> starting to streamline javascript in controller
---------------------------------------
16/03/2014-----------------------------
-group meeting from 12:30-14:30
-prep for presentation
-discussion about d3
-Julian and Charlie worked on integrating d3
-Julian and I worked on building SQL queries
-I worked on reducing redundant code in /analytics/controllers/analytics_json.php
-> added arguments to functions to allow functions to be passed as
parameters rather than copying the function code for every chart we want to
generate.
----------------------------------------
18/03/2014------------------------------
-Spent about 2 hours preparing for the part B presentation. This involved taking
screenshots of code changes and writing relevant descriptions as well as writing
slides that describe issues we had encountered as well as our goals moving forward
----------------------------------------
20/03/2014------------------------------
-pair programming with Julian from 11:30-13:30
-looked at what files need to be modified to add filters
-looked at how to grab categories for filters
-> grabbing from database
-> inspected the filtering in the reports tab
-> discussed what we want to allow users to be able to filter
-> discussed how we would deal with charts that require specific x and y
----------------------------------------

While this may be a crude way to do it, I found it fairly effective, though there have been a few shorter (~30 min.) sessions between group members after class that haven’t made it in here. These sessions were only tangentially related to developing the plugin, as they focused on using git and issues we were having with file permissions.

Looking Ahead

At this point, we can see the finish line, but that doesn’t mean there isn’t a significant chunk of work left. We would like to have filtering working within the next week so that users can choose what the charts display. Adding a few more charts and chart types would also show the potential of analytics within a data-driven project like Ushahidi. A less sexy item on our to-do list is making sure the code adheres to the Ushahidi project’s coding standards, as we’ve neglected this aspect of contributing to a software project. Finally, we would like to streamline the code as much as possible, which mainly involves adding flexibility through functions instead of hard-coded JavaScript for each chart.

I am taking a class at university that has us contributing to a Free Open Source Software (FOSS) project as a means to learn about real-world project development. FOSS is a great way to cut your teeth as a software developer because these projects rely on contributions from anyone with the skill and time to make them. Contributing to a FOSS project allows a developer, especially one just starting out, to build a portfolio outside of school, on top of whatever employment they may or may not have as a developer. Large FOSS projects, Firefox for example, are usually managed professionally by a dedicated team, so contributing to them builds experience working within a development team and environment where there are coding standards to follow and team members who scrutinize your work.

Ushahidi

Ushahidi is a FOSS project developed to take in data and represent it geographically. It was initially developed in response to the unrest following the 2008 Kenyan elections, when it was used to take in reports of unrest and represent them on a Google map of Kenya. Since then, Ushahidi has been deployed to track everything from disasters to potholes via user-submitted reports, and boasts 3000+ deployments.

Kohana

Ushahidi 2.7.2 uses the Kohana PHP framework. This framework uses a modular, cascading file system. Ushahidi is modular in its very nature, with the default directories present in a basic Ushahidi install being essentially hard-coded modules. We added to this by adding our plugin to the plugins folder.