I want to map each row with the corresponding value of x. This operation is not standard matrix multiplication, though I feel like it should be!

I admit I slept in Linear Algebra class. Thus I was a bit dumbfounded about how to express it using normal matrix multiplication. While I could do it using LinAlg’s mapping functions, I knew that it could be done because it was all linear transformations.

Luckily James Lawrence, who maintains LinAlg, was emailing me and asked me about it (thanks!). He gave me some code that did it both ways. After I read it, I slapped myself.
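The two ways look something like this, sketched with plain Ruby arrays (my own function names, not LinAlg's actual API):

```ruby
# Scale each row of matrix m by the corresponding entry of vector x.

# Way 1: build a diagonal matrix from x, then do a normal matrix multiply.
def scale_rows_by_matmul(m, x)
  n = x.size
  diag = Array.new(n) { |i| Array.new(n) { |j| i == j ? x[i] : 0 } }
  # compute diag * m the long way, summing over the inner dimension
  diag.map do |row|
    (0...m.first.size).map do |j|
      (0...n).reduce(0) { |sum, k| sum + row[k] * m[k][j] }
    end
  end
end

# Way 2: just map over the rows, multiplying each by its scalar directly.
def scale_rows_by_map(m, x)
  m.each_with_index.map { |row, i| row.map { |v| v * x[i] } }
end
```

Both give the same answer; the diag version just does a lot of multiplications by zero along the way.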

I wasn't sure which one was faster. Maybe they'd be about the same. Maybe the matrix multiply would be slower because you'd have to go through all the multiplications. Per James's suggestion, I benchmarked them, in the fine form of yak shaving.

Running on a laptop with a 1.1GHz Pentium M and 758MB of RAM, I did two experiments. For the first, I kept the matrix at 800 rows and grew the number of columns. Let's see what it looks like:

[Graph: time (secs) vs. column size]

I should have labeled the axes in GnuPlot, but you'll live. The y axis is the time in seconds, and the x axis is the column size. Uninteresting, but nice to know it's linear. The two functions aren't significantly different in how long they take. I expected the map (calculate2) to be much faster, since it doesn't have to do all the multiplies. Oh well.

I almost didn't run the second test, but it proved to be a bit more interesting. This time I kept 800 columns and grew the number of rows. Same axes, different graph:

[Graph: time (secs) vs. row size]

Whoa! It's exponential. Or quadratic. I can't tell. Anyway, anything curving up is bad news. I suspected this might have something to do with row-major/column-major ordering: C stores matrices row by row, whereas Fortran stores them column by column. Update: As corrected by James L., in the first experiment, growing the columns creates more multiplies inside the diag() call, but the size of the diagonal stays the same. Growing the rows, however, creates fewer multiplies inside the diag() call, but each increase in row size increases both dimensions of the resulting diagonal matrix, giving us n^2. So it's quadratic.

So what I found wasn't what I went looking for. But given that I'd read about it before, and that it would have made sense had I thought about it, I guess it's not super surprising. Let's see if we can use transpose to cut down on the time. We'll compare growing the rows as before against growing the rows but transposing the input and then transposing the output, to get the same result. What's it look like:

[Graph: time (secs) vs. row size]

This is good news. Even though the transposes are a couple of extra manipulations, they save computation at larger matrix sizes. The most interesting part of the graph is where the two curves cross. If LinAlg (or any other package, for that matter) could somehow predict where that crossover point will be, it could switch between the two implementations. The only approach I can think of is a layer underneath that randomly samples each implementation as users call the function, interpolates its growth curve, and then calculates the crossing analytically. I don't currently know of any package that does this (or if one does, I don't know about it, because it already performs so well by doing the switch!)

This was a nice little diversion from my side projects…a side project of side projects. Back to learning about information gain and its ilk. At least something came out of it. I have a nice little experiment module that I can use to do other experiments. And I spent way too much time on it not to post something…

This was the example given for calculating the Fibonacci sequence in parallel. It's the standard mathematical way to define it, and it looks clean enough. So instead of trying it out in Cilk, I fired up Erlang to try my hand at a port. I found it a little difficult because while you can easily spawn processes in Erlang, there was no quick way for the parent process to wait/sync/join child processes and get their results. Since that was beside the point of the exercise, I fired up Ruby instead, even though it has a slow threading library (which is supposed to be fixed in 1.9, plus Fibers!). I'll do it in Erlang some other time.
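In Ruby, the threaded port comes out to something like this (a sketch of the approach, not the exact code I benchmarked):

```ruby
# Naive parallel Fibonacci: each call spawns two threads for the subproblems.
# Thread#value joins the thread and returns the block's result.
def fib(n)
  return n if n < 2
  t1 = Thread.new { fib(n - 1) }
  t2 = Thread.new { fib(n - 2) }
  t1.value + t2.value
end
```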

This one ran much, much faster, at about 0.02594 seconds. At this point, it's probably the overhead of thread creation that's making it take so long to run. Maybe with green threads or lightweight threads, the threaded version would run much faster. That makes me want to try it in Erlang just to compare. But wtf, adding shouldn't take that long, even if it is only 0.025 seconds.

When I thought about it, it wasn't an efficient algorithm: there are a lot of wasted cycles. In order to compute f(n), you have to calculate f(n – 1) and f(n – 2) in separate threads.

The f(n – 1) thread requires it to spawn two more threads to compute f(n – 2) and f(n – 3).

The f(n – 2) thread requires it to spawn two more threads to compute f(n – 3) and f(n – 4).

Notice that the threads for f(n – 1) and f(n – 2) each spawn a separate thread to calculate f(n – 3). And since this algorithm has no way for threads to share their results, they have to recompute values all over again. The higher the n, the exponentially worse the problem gets. To calculate the speedup an algorithm gets from adding more processors, you take the total amount of work required and divide it by the span of the parallel graph. If that didn't make sense, read lecture one for Cilk, which is where the diagram comes from. So for fib(n):

Twork = O(φ^n), where φ ≈ 1.618

The total amount of work is the total number of processes spawned. Since every f(n) recursively spawns two more processes, the call tree roughly doubles at each level, so the number of processes grows exponentially, on the order of φ^n (the golden ratio, which is where Fibonacci growth lands), not polynomially.

Tspan = O(n)

The total span is the longest chain of dependent calculations a particular result has to wait on. A la the diagram, it's the height of the tree: f(n) depends on f(n – 1), which depends on f(n – 2), and so on down to zero, so that's about n nodes.

Therefore, for fib(n), the processor speed up is at most:

Tw / Ts = O(φ^n) / O(n) = O(φ^n / n)

That parallelism is exponential in n, so in principle the algorithm can soak up as many processors as you can throw at it. But that's exactly the problem: the total work is also exponential, so even with a machine room full of processors, computing fib(1000) this way does astronomically more additions than a simple sequential loop would. Not so good for a parallel program that's just doing addition.

As a last version, I wrote one that computes the Fibonacci sequence from 0 up to n, keeping the running results as it goes, instead of the recursive version that has to work its way from n back down to zero.

It's not space-efficient, since I wrote it quickly, but it beat the pants off the other two, running at 0.00014 seconds. As you can see, you're not recalculating any f(n) more times than you need to.
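The bottom-up version is just a table fill; a quick sketch:

```ruby
# Bottom-up Fibonacci: build from 0 up to n, reusing earlier results.
# Not space-efficient (it keeps the whole table), just like my quick version.
def fib_up(n)
  table = [0, 1]
  (2..n).each { |i| table[i] = table[i - 1] + table[i - 2] }
  table[n]
end
```

Every f(i) is computed exactly once, which is why it wins so decisively.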

I wish Cilk had a better first example for parallel programs. Given that the guy making Cilk is the same guy that co-wrote the famous book on algorithms, I was surprised. However, it was a fun exercise, and it made me think about algorithms again.

I'll find some other little project that requires me to write in Erlang, rather than falling back on the comfortable Ruby. Snippet below if you want to run it yourself.

Like lots of people, I've been playing Scrabulous on Facebook. I'm not much of a wordsmith, but I have fun playing people. Justin told me that his goal in life was to span two triple-word scores, to get a 9x word score. Not to be outdone, I wondered what words could give you a 27x word score by spanning all three triple-word scores in a row. That takes a fifteen-letter word.

Since you can only put down at most seven tiles per turn, there need to be words in between the triple-word scores to help you fill out the row. These "bridge words" can't sit on a triple-word score themselves, and they must be between two and six letters long on each side, where the total length of both words has to be at least eight.

So I wrote a program in a couple hours to find them. I did take into account whether a word was possible to make based on the Scrabble tile distribution, including blanks. There are 286 of them thus far in the TWL Scrabble dictionary. I didn't find ones that use more than one bridge word on a side. The points aren't completely accurate either.
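The core of the search looks something like this (a sketch with hypothetical names; the real version also checked tile distribution and blanks, and did the scoring):

```ruby
# On a 15-square row, the triple-word squares sit at columns 1, 8, and 15
# (indices 0, 7, and 14). Bridge words must fit strictly between them.
def find_bridged(words)
  lookup = {}
  words.each { |w| lookup[w] = true }

  results = []
  words.select { |w| w.length == 15 }.each do |word|
    (2..6).each do |len1|
      (1..(7 - len1)).each do |start1|      # left bridge lives in indices 1..6
        b1 = word[start1, len1]
        next unless lookup[b1]
        (2..6).each do |len2|
          (8..(14 - len2)).each do |start2| # right bridge lives in indices 8..13
            b2 = word[start2, len2]
            next unless lookup[b2]
            next if len1 + len2 < 8         # at most 7 tiles left to play
            results << [word, b1, b2]
          end
        end
      end
    end
  end
  results
end
```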

The first number is the points you'd get, followed by the two bridge words. Based simply on the probability of drawing the letters from a full bag, "irrationalities" is the most likely word. (In reality, this never happens, since you'd need to draw the right tiles over several turns to place the bridge words and reach the sides.)

459 : irrationalistic : [“ratio”, “alist”]

You can score a whopping 459 points with it. The word that has the biggest word score is “coenzymatically”

972 : coenzymatically : [“enzym”, “tical”]

Yes. “tical” is a word.

ti·cal, noun, plural: 1. a former silver coin and monetary unit of Siam, equal to 100 satang; replaced in 1928 by the baht.

There are quite a number of surprisingly common words, as well as quite a number of odd ones. As a note, the point scores aren't exactly accurate. I didn't take into account the double-letter scores that might occur if you place a letter on one. But given that the multiplier here is 27, and I picked the longest bridge words (which usually cover the double-letter score), it shouldn't affect things too much. I had held off posting until I fixed that, but this was a one-off amusement and curiosity rather than anything significant, so I figured I'd just post it. Enjoy!

It’s not too common that I get forwards nowadays. With the advent of social news, all the stupid links have migrated there. But on occasion, I’ll get one from the older crowd. This one was a riddle with a movie of the answer attached.

What common English word is 9 letters long, and each time you remove a letter from it, it still remains an English word… from 9 letters all the way down to a single remaining letter?

It gave only one answer, however: "startling". I ended up wondering whether there was more than one, so I wanted to see how fast I could write something to find out. It'd be good practice, since most web programming is design-and-architecture hard, rather than algorithms hard. Besides, it's been a while since I wrote a recursive function.

Embarrassingly, it took 2.5 to 3 hours. I thought I'd be able to knock it out in one. I had some problems at first with removing a single letter from a word. Ever since I came to Ruby, I hardly ever deal with indices, so finding those methods took a bit of time. Then, recursion is always a bit of a mind bender when you don't do it often.

I also spent some time looking up what counted as one-letter words, but then found out that there's a whole dictionary of one-letter words. In the end I only considered "i" and "a" as valid one-letter words, and I threw out all contractions.
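Here's a sketch of the approach (the dictionary is whatever word list you have on hand, loaded into a Set):

```ruby
require 'set'

# Can this word be reduced one letter at a time, with every intermediate
# string still a dictionary word? Only "i" and "a" count at length one.
def reducible?(word, dict)
  return %w[i a].include?(word) if word.length == 1
  (0...word.length).any? do |i|
    shorter = word[0, i] + word[(i + 1)..-1]  # drop the letter at index i
    dict.include?(shorter) && reducible?(shorter, dict)
  end
end

# Scan the dictionary for nine-letter words with a full reduction chain.
def find_riddle_words(dict)
  dict.to_a.select { |w| w.length == 9 && reducible?(w, dict) }
end
```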

See if you can write shorter/faster/better code. It's certainly a better workout than FizzBuzz. Seeing as it took me a while and I didn't clean up the code, I've set the bar pretty low to beat. There are other things that would optimize it: I didn't skip shorter words I'd already checked as part of a longer word; I just ran down the list of dictionary words. Try it out yourself, in whatever language. (This sounds like a good beginner problem to write in Arc.) Go ahead. I'll wait.

Got it?

Here’s the list I came up with along with their chains. You’ll notice that it’s actually a tree that branches.

I feel like I might have covered this before, but I was looking for a way to test respond_to. I had found this post on how to test it, but after looking at it for a while, I found myself rewriting it. Mainly, I took out the parts that convert the Mime types and used Rails' own Mime type objects instead. You can use it like this:

request_mime(:fbml) do
  get :list
  assert_response_mime(:fbml)
end

request_mime("text/xml") do
  get :list
  assert_response_mime("text/xml")
end

Just include it in your test_helper.rb file in test/

class Test::Unit::TestCase
  include Threecglabs::MimeTestHelpers
end

Here’s “mime_test_helpers.rb”. Just throw it in lib/

module Threecglabs
  module MimeTestHelpers

    def self.included(mod)
      mod.class_eval do
        include MimeRequest
        include MimeAssertions
      end
    end

I didn't think I'd have to do this, but I ended up writing a filter that acts like a switch statement for different MIME types. Let me explain. Normally, in Rails, you respond to requests for different content types with something like this:
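That is, a respond_to block in the action. Here it is wrapped in a tiny stand-in responder class so the sketch runs outside Rails (FakeResponder and the rendered strings are made up for illustration):

```ruby
# A minimal stand-in for Rails' respond_to: the block registers a handler
# per format, and only the handler matching the requested format runs.
class FakeResponder
  def initialize(format)   # the requested type, e.g. :html or :fbml
    @format = format
    @body = nil
  end

  # implements format.html { ... }, format.fbml { ... }, etc.
  def method_missing(type, &block)
    @body = block.call if type == @format
  end

  def respond_to
    yield self
    @body
  end
end

# In a real controller action this shape would be:
#   respond_to do |format|
#     format.html { render :layout => 'application' }
#     format.fbml { render :layout => 'facebook' }
#   end
def show(requested)
  FakeResponder.new(requested).respond_to do |format|
    format.html { "rendered html" }
    format.fbml { "rendered fbml" }
  end
end
```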

When I started using the facebooker library, it already came with its own authentication before_filter. That means we have two authentication filters: one native and one for Facebook. Mobtropolis users don't have to be on Facebook to use it, and Facebook users don't have to sign up again on Mobtropolis.

However, since before_filters execute in succession, this leads to a case where the Facebook authentication would be called even when HTML was requested, and vice versa. The alternative was to take apart both authentication filters and create one monolithic filter to handle the two cases. Instead, I did this:

By the way, I tried altering the filter_chain as a request came in. Filter chains are copied and passed around between filters, so you can't write a filter that alters the filter chain. Don't waste your time crawling around in the guts of Rails to do this like I did. It's just as well, as that would be a nightmare to maintain.

It does have a weakness, though: all the filters have to share the same set of :except and :only options.

It turned out the code for this sort of magic was fairly easy. I'm not sure if there's an easier way to do what I wanted, but I'll see if the Rails core people find it useful (or not). In the meanwhile, it's here for any Rubyists who have written plugins before and want to play with it. As with the usual mumbo jumbo: it's provided as-is, I'm not maintaining it, and you can do whatever you want with it:

module Threecglabs
  module Filters

    # MimeResponderFilter
    module MimeResponderFilter

      def self.included(mod)
        mod.extend(ClassMethods)
      end

      # Filters can respond to different mime types, so that you can use
      # different filters depending on which mime type is being requested:
      #
      #   before_respond_to_filter :except => [:login, :signup, :forgot, :invite_request, :profile] do |format|
      #     format.html :authentication_filter
      #     format.fbml :ensure_application_is_installed_by_facebook_user
      #   end
      #
      # This way, one can take the appropriate actions in setting up authentication
      # for different mime types, and still keep the different kinds of
      # implementations separate.
      #
      # The formats also take blocks, like regular filters:
      #
      #   before_respond_to_filter :only => :home do |format|
      #     format.html do |controller|
      #       return if controller.logged_in?
      #       controller.send(:redirect_to, :controller => :home)
      #     end
      #     format.fbml :ensure_application_is_installed_by_facebook_user
      #   end
      #
      # NOTE: an :all format defaults to :html; therefore, a format.html is required
      module ClassMethods
        def before_respond_to_filter(options = {}, &block)
          before_filter MimeResponderFilter.new(&block), options
        end
      end

      private

      # This is a class that implements a MIME responder filter
      class MimeResponderFilter #:nodoc:
        attr_reader :filters

        # the block registers a handler for each format via method_missing below
        def initialize(&block)
          @filters = {}
          block.call(self)
        end

        # implements the "format.#{mime_type}" part of the filter
        def method_missing(mime_type, method_name = nil, &block)
          if block_given?
            @filters[mime_type.to_sym] = block
          else
            @filters[mime_type.to_sym] = method_name.to_sym
          end
        end
      end
    end
  end
end

In the last two months, as I added more features to Mobtropolis, I found it painful to lay things out all over again from scratch. As a result, it sucked to see ugly layouts on the new pages juxtaposed with all the styling I had done before. It wasn't until a week ago that I said to myself, "Stop the madness!" and started refactoring my views, something I never thought of doing much of until now. When you don't, the barrage of angle brackets blows out of proportion, and complex views start to look pretty damn fugly.

What I want to be able to do is take common mini-layouts in my views and make them available as helpers, so that I can piece pages together in bigger chunks rather than in divs and spans. In addition, it makes your interface more consistent for your users.

Some good resources were presentations from the 2007 RailsConf, like "V is for Vexing" and "Keeping Your Views DRY". While a lot of view DRYing talks about form builders, I didn't see any on table builders, so I decided to take a stab at it. Personally, I don't like to overuse tables for layout. But as certain elements in my page layouts kept repeating, I refactored them first into helpers, and then, once there was more than one, extracted a simple table builder. This is how you'd use it:

For example, I have a mini-layout where I show simple stats:

Here’s how I used a simple table builder to display the above:

And I find that I started using the same sort of thing in other places, like in a user’s profile:

I cut out some details so you can see that it's just a block that gets passed a ScoreCard object, on which you call placard to add another score to the score_card. It sure beats writing

and

over and over again.

To declare the helper, we describe the structure of the table inside the declaration of a ScoreCard object, which holds the contents of the placards. When they're called in the block above in the template, they get stored in the ScoreCard object instead of being written out to erb immediately. That way, we can place them wherever in the table we please, with a call to card.display(:placards):

So then what does ScoreCard look like? Pretty simple. It has a call for each cell that can be filled in the mini-layout. It's analogous to how form_for passes in a form object, on which you can call text_field, etc.
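To give the flavor in plain Ruby, here's a toy version of the idea (made-up markup, and without the generic cells() machinery):

```ruby
# Toy ScoreCard: calls from the template block are stored, not written out
# immediately, so the helper can place them wherever the table wants them.
class ScoreCard
  def initialize
    @placards = []
  end

  # called from the template block; stashes the cell html for later
  def placard(label, value)
    @placards << "<td><b>#{value}</b><br/>#{label}</td>"
  end

  def display(section)
    @placards.join if section == :placards
  end
end

# the helper owns the table structure and drops the stored cells into it
def score_card_for
  card = ScoreCard.new
  yield card
  "<table><tr>#{card.display(:placards)}</tr></table>"
end
```

Usage looks like `score_card_for { |card| card.placard("wins", 3) }`, which hands back the assembled table markup.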

Notice that there's a call to cells() to declare the types of cells, and a method of the same name that builds the HTML for each cell. If you have other types of cells, you simply add them to the list of cells and create a method for each that gets called in the template. By convention, you stick the HTML of the cell contents in "@#{name_of_cell}"[:html] and, if you want, pass in the html_options and stick them in "@#{name_of_cell}"[:options]. Then you can access those in the helper wherever you want.

Let’s try another one. I have a mini_layout with a picture, and some caption underneath it, like a polaroid.

I've tried to pull all the plumbing out into TableBuilder (drop it into lib/), leaving only the flexibility of creating the table structure in the helper and the format of the cells in the object. It turns out TableBuilder isn't too complex either. It uses some metaprogramming to create instance variables. I know it pollutes the object's instance variable namespace, but I wanted to be able to say @caption[:html] rather than @cells[:caption][:html].

I've found that having these helpers cleans up my views significantly, though I have to admit it's not exactly the easiest thing to use yet. In addition, I'm not exactly thrilled about having TableBuilder inherit from ActionView::Base, but it was the only way I could figure out to get the call to concat() to work. In any case, the point is that refactoring your views into helpers is a good idea, and even something like a table builder goes a long way, even if you don't do it the way I did. Lemme know if this helps or hinders you. Snippet!

A week ago, I took a break from Mobtropolis, and…of all things ended up writing a simple distributed crawler in Ruby. I hesitated posting it at first, since crawlers are conceptually pretty simple. But eh, what the heck.

This was just an exercise in DRb and Hpricot, so don't use it for your production work, whatever that may be. A real crawler is far more robust than what I wrote. And don't leave it running, hammering at sites, since it'll get you banned.

And that’s it. It returns documents in an XPath traversable form, courtesy of Hpricot.

A web crawler is a program that downloads pages, takes note of the links on each page, and puts those links on its queue of links to crawl. Then it takes the next link off the queue, downloads that page, and does the same thing. Rinse and repeat.

First, we create a class method named start that creates an instance of a webcrawler and then starts it. We could have done without this helper method, but it makes the crawler easier to call.

This bears a little explaining. The first webcrawler you start creates a DRb server, if one doesn't already exist, and does the setup. Every subsequent webcrawler connects to that server and starts picking URLs off the work queue.

So when you start a DRb server, you call start_server with a URI, and then you start a RingServer. A RingServer provides a way for subsequent clients to find services offered by the server or by other clients.

Next, we register a URL work queue and a URLs-visited hash as services. The URL work queue is a TupleSpace. If you haven't heard of a TupleSpace, the easiest way to think of it is as a bulletin board. Clients post items on it, and other clients can take them off. This is what we'll use as a work queue of URLs to crawl.
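Stripped of the DRb and RingServer plumbing, the bulletin-board behavior looks like this in-process (the URLs are just placeholders):

```ruby
require 'rinda/tuplespace'

# A TupleSpace as a work queue: writers post [:url, "..."] tuples,
# and workers take the next tuple matching a pattern.
ts = Rinda::TupleSpace.new

ts.write([:url, "http://example.com/"])
ts.write([:url, "http://example.com/about"])

# take blocks until a matching tuple exists, then removes and returns it;
# nil in the pattern matches any value in that position
tuple = ts.take([:url, nil])
```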

The URLs visited is a Hash so we can check which URLs we've already visited. Ideally, we'd use the URL work queue itself, but DRb seems to only provide blocking calls for reading/taking from the TupleSpace. That doesn't make sense to me, but I couldn't find a non-blocking call that day. Lemme know if I'm wrong.

Here are the guts of the crawler. It loops forever, taking a URL off the work queue using take(), which looks for a pattern in the TupleSpace and finds the first tuple that matches. Then we mark the URL as visited in @urls_status, download the resource at the URL, use Hpricot to parse it into a document, and yield it. If we can't download it for whatever reason, we grab the next URL. Lastly, we extract all the URLs in the document and add them to the work queue TupleSpace. Then we do it again.

The private methods download_resource(), extract_urls(), and add_new_urls() are merely details, and I won't go over them. But if you want to check them out, you can download the entire file. There are weaknesses I haven't solved, of course. If the first client goes down, everyone goes down. Also, there's no central place to put the processing done by the clients. But like lazy textbook writers, I'll leave those as an exercise for the reader. Snippet!

When I started writing some code recently, I noticed that my controllers were getting fat. There was much to do, but there was a bunch of stuff in there that didn't have anything to do with actually carrying out the action, things like sending notifications. ActiveRecord already has observers that take action on certain callbacks. However, what I needed was to take action on certain state transitions. Not seeing any immediate solution in the Rails API, I decided to test myself and try writing one. I was bored, too. So while I'm not sure it was worth the time, it certainly was kind of interesting. Here's what I came up with:

Just as a contrived example, let's say we're modeling the transmission of a car. It has three modes: "park", "reverse", and "drive". We want to send a notification when a user tries to change it from "reverse" to "drive", but not when he tries to change it from "park" to "drive". If it didn't matter, and we just wanted notifications whenever the state changed to "drive", we'd use the observers that come with ActiveRecord. But since we do care where the state transition came from, here's what I came up with:
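Here's a plain-Ruby sketch of the behavior I was after (no ActiveRecord; the class and method names are made up for illustration):

```ruby
# Override the attribute writer so observers hear about specific
# from -> to transitions, not just "the value changed".
class Transmission
  def initialize
    @mode = "park"
    @observers = []
  end
  attr_reader :mode

  # register a callback for one particular transition
  def on_transition(from, to, &block)
    @observers << [from, to, block]
  end

  # the overridden writer: fire only observers watching this exact transition
  def mode=(new_mode)
    @observers.each do |from, to, block|
      block.call if from == @mode && to == new_mode
    end
    @mode = new_mode
  end
end
```

So `on_transition("reverse", "drive") { send_notification }` fires for reverse-to-drive, but a park-to-drive change slips by silently.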

So where's the magic? It took a bit of digging around. There were two major things I had to do: insert observers during initialization, and override the setting of attributes to notify observers on updates.

ActiveRecord doesn't exactly allow you to override the constructor, and I don't think I tried too hard to mess around with it. Looking on the web, I happened upon has_many :through again, which has some good tips that helped me through Rails' rough edges. While I didn't exactly follow its advice, I did find out about the callback :after_initialize. It must be something new, because I don't see it in the 2nd edition of the Rails book, and the current official API doesn't list it. Other Rails API manuals seem to be more comprehensive, like RailsBrain and Rails Manual.

Then, overriding attributes has always been a bit of a mystery. I found a listing of the attribute update semantics, which helped me figure out what I was looking for, but it was wrong in saying you can use the first form (article.attributes[:attr_name]=value) to set an attribute. Looking at the Rails 1.2.3 code, attributes is a read-only hash. But it's right that you should override the second form (article.attr_name=value), since update_attribute() and update_attributes() depend on it.

I had a difficult time figuring out how to define methods on an instance of a class. The only things I came up with were to use define_method or to include a module of instance methods; instance_eval() didn't work. Metaprogramming in Ruby gets rather confusing when you're doing it inside a method; it seems hard to keep track of which context you're in.
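For example, one route that works is define_method on the instance's singleton class (a minimal sketch):

```ruby
obj = Object.new

# grab the object's singleton class and define a method only obj will have
singleton = class << obj; self; end
singleton.send(:define_method, :greet) { "hi" }
```

After this, `obj.greet` works, while other Object instances are untouched.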

So if you can make use of this, great. If you think it's worth turning into a plugin, let me know. If you know of a better way, by all means, let me know that too. Tip!

I never really find many occasions to use 'ensure'. It's a Ruby keyword for blocks of code that are guaranteed to run when the block finishes, no matter what happens, exceptions or not. And then a quickie that I found in the Rails core:

It does something simple: it just silences the warnings for a particular block of code. On first glance, I would have written it without the 'ensure'. However, that won't work for yielded blocks that call return, or when exceptions are thrown. This way, no matter what happens in the block, it always restores the state that it changed.
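The helper in question is only a few lines; from memory it looks like this:

```ruby
# Silence Ruby warnings for the duration of a block: flip $VERBOSE off,
# run the block, and let ensure restore the old value even if the block
# raises or returns early.
def silence_warnings
  old_verbose, $VERBOSE = $VERBOSE, nil
  yield
ensure
  $VERBOSE = old_verbose
end
```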