Several years ago, I migrated to my own blog application, which I built using Ruby and CouchDB. Five years later, I'm switching to a static site generator called Hugo. Hugo is written in Golang and has some very nice features like templates, themes, taxonomies and more.

There are many good generators out there, but there were a couple of things I liked about Hugo.

Since it is written in Golang, it ships as a single static binary, so there are no dependencies to download.

For Christmas (or just after), I got a BeagleBone to try out some of the many input options that board has. I have done programming from the web and server perspective (and a little GUI work), but I have not tried programming something that interacts with the physical environment. Initially, I just planned to get up to speed with electrical circuits and gain a better understanding of some things I studied in college (Electrical Engineering I and II). Basically, I'll make some LEDs light up based on a program I write. I will probably start with Node.js since Bonescript is a pretty nice library.

Initial Setup

By default, the BeagleBone comes with the Angstrom distribution installed on the MicroSD card. I wanted to install some applications on it that I was familiar with from other Linux distributions, including tmux and mosh.

Background Processes

After getting tmux installed, I thought I would give it a try. I created a session and then detached from the window. I then re-attached without a problem. Things seemed to be working fine. I then detached and closed my ssh session. When I reconnected and tried to re-attach my tmux session (tmux attach), it did not work. I hunted around and found other discussions of this problem, but none of the recommendations worked for me. After much searching, I ran across this discussion.

The Angstrom distribution uses the systemd system and service manager. The configs for systemd services live in /lib/systemd/system/. Angstrom, being an embedded distribution, appears to favor lighter-weight software. For ssh, it uses dropbear, and the file we are interested in is:

/lib/systemd/system/dropbear@.service

In this file, we need to tell systemd not to kill processes when the ssh session exits. To do this, we need to edit (or add) the KillMode line. It should look like this:

KillMode=process

The value “process” tells systemd to kill only the main process. In this case, it will kill only the dropbear process and leave other processes, like tmux, running. After making this change, I rebooted to make sure it was in effect. I was then able to start a tmux session, detach, end my ssh session, reconnect via ssh and re-attach to the tmux session.
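For reference, the edited part of /lib/systemd/system/dropbear@.service would look something like this (only the KillMode line comes from the change described here; the rest of the [Service] section varies by Angstrom version and is left as shipped):

[Service]
# ... existing ExecStart and related lines stay as shipped ...
KillMode=process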

Along the same lines, mosh requires a background process to keep running after the ssh connection that started it ends. Needless to say, mosh wasn't working originally, but after this change, it works like a charm. Now that I have my BeagleBone set up, I'm ready to start figuring out my first simple circuit to build.

This article might be more of a reference for me, but I thought it showed a clever use of scope in JavaScript. The problem I was trying to solve: run a SQL query to see if a row already exists in a table based on a key value and, if it doesn't, insert the row. I know I could have done it with a few callbacks or nested functions, but I went in search of something better.

I had found several references to the async library. In particular, I thought the waterfall function would be a good fit, with one exception: it wasn't immediately obvious how to pass the key parameter to the first function, which checks whether the key is already used. I figured I could put all of the functions inside another function and reference variables in the enclosing scope. It would turn out something like:
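A sketch of what that nested version might have looked like (I never actually tried it, so the names, the in-memory store, and the minimal waterfall() stand-in for async.waterfall are all assumptions made so the sketch runs standalone):

```javascript
// Minimal stand-in for async.waterfall: each task's results are passed
// to the next task, and the first error short-circuits to `done`.
function waterfall(tasks, done) {
  function next(err) {
    var args = Array.prototype.slice.call(arguments, 1);
    if (err || tasks.length === 0) return done.apply(null, arguments);
    tasks.shift().apply(null, args.concat([next]));
  }
  next(null);
}

var store = {};  // stand-in for the real database

function saveEntry(key, value, done) {
  // Every task is nested here so it can see `key` and `value`
  // in the surrounding function's scope.
  waterfall([
    function doesKeyExist(callback) {
      callback(null, Object.prototype.hasOwnProperty.call(store, key));
    },
    function addEntry(exists, callback) {
      if (exists) return callback(new Error('key already used'));
      store[key] = value;
      callback(null, value);
    }
  ], done);
}
```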

I didn't actually try this, but I was thinking of something similar. Unfortunately, testing the inner functions is difficult because they rely on the context of the surrounding function, so I went in search of a better solution. In searching, I found this answer on Stack Overflow, which related closely to what I was trying to do. I wanted to pass a parameter to the functions handed to async.waterfall as tasks, but I didn't want to embed them. Here is an excerpt of what I came up with:
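A sketch of the pattern in plain JavaScript (my actual code was CoffeeScript; the function names are the real ones, while the in-memory store is an assumption standing in for the real database calls):

```javascript
var store = {};  // stand-in for the real database

// Calling _doesKeyExists(key) returns a task with the (callback)
// signature async.waterfall expects; `key` stays visible inside
// the returned function via the closure.
function _doesKeyExists(key) {
  return function (callback) {
    callback(null, Object.prototype.hasOwnProperty.call(store, key));
  };
}

// Likewise, _addEntry(key, value) captures both values for the second task.
function _addEntry(key, value) {
  return function (exists, callback) {
    if (exists) return callback(new Error('key already used'));
    store[key] = value;
    callback(null, value);
  };
}
```

Each returned function can be exercised on its own in a test, or handed straight to async.waterfall.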

By calling a function with the parameters and then returning a function with the signature expected by async.waterfall, the variables are available because they are in scope. This reminded me of the information-hiding examples in JavaScript: The Good Parts.

Let's break down the _doesKeyExists function:

_doesKeyExists(key)
(callback) =>
…

With some of CoffeeScript’s goodness (last expression returned and shorter function declaration), an anonymous function is returned with the callback signature. key is available within the anonymous function because it is in the outer scope.

By separating the _doesKeyExists and _addEntry functions, they can now be tested separately, but can also be used in the async.waterfall call:

async.waterfall [ @_doesKeyExists(key), @_addEntry(key, value) ], …

This seemed like a clever approach to use within Node.js, or in the browser, to pass parameters to a series of async calls. Again, I'm relatively new to evented programming, so there is probably a better way to do this, but I could still see this being useful in future hacking.

I decided to take a look at R this weekend between our family events. I had looked at R before when I ran across the R Tutorial; I bookmarked it and decided I would come back later. The other day at work, a professor doing big-data analysis struck up a conversation about R and offered to create some examples. He recommended working through the examples and tweaking them as a way to learn R.

Upon further consideration, I decided to go back to the R Tutorial and work through the basics of input and data types. I felt like I needed a base before trying some of the examples. This approach seemed to work well, at least for me. I worked through the first two or three sections of the tutorial. When I got to the plotting section, things got interesting. Creating plots with the built-in functions seemed very straightforward and powerful. This led me on a quest for plotting libraries with subjectively more visually appealing graphs (though the default plots aren't too shabby).

I had used Google's Charting Tools before and was curious to see if there was an option to use them from R. It turns out there is an excellent library called googleVis. This library creates the HTML, JavaScript and CSS required to create graphs from R. The resulting graphs are visually appealing and interactive. The one downside is that, since they rely on the Google Charting Tools, they require an Internet connection to pull down the required JavaScript from Google. I will probably use googleVis in the future for some charts, but it wasn't ideal for what I had in mind.

The next graphing library I looked at was ggplot2. From the website examples, this library appeared to be exactly what I was looking for, and each function is well documented in the reference manual. When I started trying to use the library, I had already pulled in my dataset from a CSV file. My initial problem was figuring out how the data was passed to the ggplot function and how this related to the qplot function. The “grammar of graphics” was getting the best of me. After searching the web, I found the R Cookbook, which had more examples of scatter plots, not to mention a lot of other good R information. These examples provided the missing link for me: how to supply the data to the graphing functions to get the plots to work. With the combination of ggplot2 and the R Cookbook, I was able to create graphs that provided some additional insight into the data.

Other Thoughts

Some other things I wanted to note:

Installing packages from CRAN is extremely easy, and they just worked.

After starting with the binary for R, I found RStudio. It looks like it is in early development, but it quickly became my default environment. With an editor, a workspace and a console, it was hard to find something better.

These were just some of my early experiences with R. Overall, I really enjoyed using it, and the ideas started flowing on how I might put it to use, everything from analyzing spending at home to analyzing data at work. Next, I hope to use R with ggplot2 to analyze the results from Apache Bench. If the results turn out to be interesting, I hope to find time to share them.

Running a virtual machine is extremely handy for development or for trying out different configurations. VirtualBox is handy virtualization software, especially since it is free. My goal was to set up a virtual machine to familiarize myself with a few different configuration management tools. I could have tried Vagrant, but since we use CentOS for most of our servers and Vagrant defaults to Debian, I went ahead and installed CentOS myself from a boot.iso.

I wanted to start my CentOS virtual machine in VirtualBox and then minimize it, planning to use ssh to connect from my host machine's terminal. Since my guest machine was running in NAT mode, I had to tell VirtualBox to forward a port from my host machine to my guest machine. I decided to forward port 2222 on my host machine to port 22 on my guest machine for ssh. From the Terminal, I ran the following commands:
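One way to set up that forwarding with VirtualBox's command-line tool (a sketch; the VM name "CentOS" and the rule name "ssh" are assumptions) is:

```shell
# Forward host port 2222 to guest port 22 on the NAT interface.
# Run while the VM is powered off; the VM name is whatever you
# called it when creating the machine.
VBoxManage modifyvm "CentOS" --natpf1 "ssh,tcp,,2222,,22"

# Then connect from the host terminal:
ssh -p 2222 user@127.0.0.1
```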

%% Create a polynomial expression from an array of numbers
polyepr(List) when is_list(List), length(List) >= 2 ->
    P = gen_epr([], List),
    case R = join_epr(P) of
        "" -> "0";
        _ -> R
    end;
polyepr(_) ->
    {error, "Need at least 2 coefficients"}.

Autotest, which is part of ZenTest, is a very handy testing application. It runs tests as changes are made to the code. When using it, I would accidentally leave it running and later notice something using up CPU cycles. It would turn out to be the autotest process, still scanning files for changes every so often. I would then stop autotest, only to have to start it up again the next time I worked on the project.

A while ago, I ran across autotest-fsevent for the Mac. It uses the Mac's FSEvents core service to determine which files have changed (or rather, to be notified when they change). This really cut down on CPU usage, especially when nothing has changed, since autotest is notified immediately when a file changes instead of polling.

I would also recommend upgrading to the latest autotest-growl. I was using an older version, and there have been improvements.

This year I made the trip to Oshkosh for the EAA AirVenture airshow. One of the coolest planes I got to see there (and there were many nice planes) was the WhiteKnightTwo. The last time I was at Oshkosh, I was able to see Scaled Composites' Boomerang, which was an aerodynamic wonder. Looking at the WK2, the two-fuselage approach reminded me of the Boomerang, but it differed in the symmetry of the plane. Looking for differences, I noticed the right fuselage had one extra exhaust on the upper outside at the back. I could not work out what it might be used for, but it was a difference. It also appeared that there was more visible wiring in the left fuselage, presumably for flight instruments.

Overall, a fascinating aircraft, especially if it enables the start of commercial space flight. I can't wait to see that happen; the folks at Scaled have done some pretty remarkable things.

Last week, my wife and I put down new mulch in the natural areas of our yard. As I was loading cart after cart of mulch to lug across the yard, I started thinking we might not be able to get all 8 yards of mulch spread that day. I really wanted to get this done so I didn’t have to worry about it over the weekend. Looking at the pile, it did not seem to be getting any smaller and it was approaching lunch time. A thought came to mind: how can I make this into a smaller task that I could accomplish in a few hours instead of looking at the entire pile over the period of a day? What if I worked to split the mulch in the middle into two separate piles (dividing and conquering)? That could make the task interesting (kids might like it) and give me a smaller goal to attempt to reach.

How many times do I find myself asking that question? How can I break task X down into something smaller so I can feel like I am accomplishing something? My understanding of goal setting comes from studying GTD and Agile development processes, and from a history of playing basketball. GTD and Agile development teach this as one of their core concepts (if I understand them correctly). The further along I got in basketball, the more we were breaking down plays or watching video in smaller chunks to analyze how we could get better. Breaking issues down into smaller tasks seems to be fairly important, and a skill I frequently find myself using (as long as I don't over-plan).

Back to my pile of mulch. I was able to get it divided into two halves.

From there, I proceeded to break it into smaller tasks. I focused on the smaller half first and then started breaking off the corners of the larger half that was left. It sure did help, and we were able to get all of the mulch spread by the end of the day (YAY!). Once again, smaller goals, while they may not have changed how fast I finished my part, helped me focus on the small units of work I needed to get done.

While playing sports (namely basketball) over a number of years, I played on a number of different teams and had a number of different coaches (two main coaches across high school and college). Looking back, it was very interesting to see the different styles of coaches and players. I was pretty lucky; for the most part, my experiences were on teams that understood the concept of “team play”.

One thing I didn't completely realize was how important the coach's leadership was and the values they instilled. When a player first joined the team, they wouldn't completely fit in, or at the very least they would struggle a little. With these core values, provided by the coach originally and promoted by the upperclassmen, a freshman (or new transfer) has a base to build from. As they grow as a player and a person (going from freshman to sophomore and so on), their playing skills would develop, but they would also continue to ‘buy in' (or gel) to the team values. Of course, this happened at a different pace depending on the player.

I am starting to see that again in agile software development. As teams shift and grow, you go through this process over and over. I think it is important to have those underlying values, but the overall character of the team will reflect the current players. I am learning there is a delicate balance between emphasizing values, letting the team gel and using the strengths of the new players. If too much emphasis is placed on values, it suppresses the strengths of new players. If the values shift too much, you lose the history that brought you success in the past. The sweet spot is somewhere in the middle, and it depends on how many returning starters you have from the previous season (or project).

About

I'm Richard Outten and this is my blog with random thoughts
mostly about software development & teams and
general hacking.