Semi-organized collection of thoughts, notes and anything else I feel like writing about

At times, I have to switch between a few different Heroku accounts. Apart from having to log in again, the other annoyance is having the right SSH key active. If you don't have the right SSH key active (i.e. if the SSH auth agent has more than one key added to it, or if it has no keys at all), you'll see errors that look like:

Your key with fingerprint: ... is not authorized to access <application-name>.
fatal: the remote end hung up unexpectedly

The proper solution for this involves configuring them in the SSH configuration files (~/.ssh/config or /etc/ssh/ssh_config). [See this article or ssh_config manpage for details]

But I’m loath to maintain all that configuration just for the sake of occasionally switching between accounts. Here’s what I usually do instead:

Clear any active identities (removing all ambiguity about which SSH key should be picked up for auth)

$ ssh-add -D

ssh-add key for the account

$ ssh-add ~/.ssh/an_account_key

Push to Heroku

$ git push heroku-remote master

Of course, this assumes that the key is already associated with your Heroku account. If you haven’t, you can do that (after heroku login) with:

$ heroku keys:add ~/.ssh/an_account_key

Note: On Linux, if you’re on GNOME, the gnome-keyring-daemon keeps adding keys back to the auth agent as you keep trying to remove them with ssh-add -D. So, it’ll look like the command is not working. The solution is to disable the damn thing (Google for it). I find the daemon annoying for the popups it keeps throwing at me, so personally, I’d be glad to see it gone.

While I use this technique mostly with Heroku, this is useful for any situations involving SSH and multiple keys. It’s useful when switching between SSH accounts, or switching between GitHub accounts (or other Git accounts), and in general, anything involving switching SSH keys for SSH auth. Of course, if you find yourself switching between the same set of accounts (keys) frequently, consider configuring them using ssh_config.
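For reference, a minimal ~/.ssh/config entry for one such account (the host alias is illustrative) would look like:

```
# Push with: git push git@heroku-account-a:<application-name>.git master
Host heroku-account-a
    HostName heroku.com
    User git
    IdentityFile ~/.ssh/an_account_key
    IdentitiesOnly yes
```

The IdentitiesOnly directive is the important bit: it stops SSH from offering every key in the agent, which is exactly the ambiguity the ssh-add -D dance works around.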

Fiddler usually works out of the box, with a few exceptions. One of those exceptions is capturing traffic from a JVM.

To capture plain HTTP traffic from a JVM, you can configure Fiddler as the proxy by setting these VM args:

-DproxySet=true
-DproxyHost=127.0.0.1
-DproxyPort=8888

[Note: Fiddler proxies at port 8888 by default]

Capturing HTTPS traffic (of course, to view it unencrypted in Fiddler) is slightly more involved. Here's how to do that.

1. Export Fiddler’s Root Certificate

Click on Tools -> Fiddler Options… to open the Fiddler Options dialog.

Switch to the HTTPS tab, and click on Export Root Certificate to Desktop.

This will generate the file: FiddlerRoot.cer on your Desktop.

2. Create a JVM Keystore using this certificate

This step requires Administrator privileges (since keytool doesn't seem to work without elevated privileges). So, open a Command Prompt as Administrator by right-clicking on the Command Prompt icon and clicking Run as administrator.

Run the following command (replacing <JAVA_HOME> with absolute path to the JDK/JRE that you’re interested in capturing traffic from):
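The command itself is missing from this archive; based on the surrounding steps, it would be a keytool import along these lines (the alias is illustrative; FiddlerKeyStore is the output file the next paragraph refers to):

```
<JAVA_HOME>\bin\keytool -import -trustcacerts -alias FiddlerRoot -file "%USERPROFILE%\Desktop\FiddlerRoot.cer" -keystore FiddlerKeyStore
```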

This will prompt you to enter a password. Remember the password, as it’s required for the next step.

Once a password is entered, this’ll create a file called FiddlerKeyStore. Remember the path to this file, as we’ll be using it in the next step. You can, of course, move it to a more convenient location and use that path.

3. Start the JVM with Fiddler as the proxy, and the Keystore you just created as a Trust Store

Essentially, we’re asking the JVM to use Fiddler as the proxy, and to trust the keys in the Keystore we just created. Here’re the VM args to configure your Keystore as the Trust Store:
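The args themselves are missing from this archive; combining the proxy properties from earlier with the standard JSSE trust store system properties, they would look something like this (the trust store path and password are placeholders for the file and password from step 2):

```
-DproxySet=true
-DproxyHost=127.0.0.1
-DproxyPort=8888
-Djavax.net.ssl.trustStore=<path-to>\FiddlerKeyStore
-Djavax.net.ssl.trustStorePassword=<password-from-step-2>
```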

Have you ever wished to have a finer control over your Heroku git repository? There’s a neat little Heroku plugin that gives you just that: heroku-repo.

Around the first release of my current project, things were happening in a frenzy. There were plenty of last minute fixes, and each of them would get built and pushed to our staging instance. On occasions, when people couldn't wait for the build to finish (wasn't a particularly long build though), or when the build was a bit flaky, they'd push directly to the Heroku staging repo. At some point, Heroku started rejecting our pushes, because its git tree had diverged from ours. It wasn't nice, but we resorted to doing force pushes to the Heroku git repo from then on.

Once the release frenzy was over, I started investigating the issue. I started by trying to get a local clone of the Heroku repo, to see if I’d find something there, but it kept timing out. I opened a shell on the staging instance and tried searching for a repo somewhere there, but in vain. Heroku doesn’t host the repo from there. I had pretty much decided to blow up the staging app and recreate it, when I found this plugin.

It’s a plugin for the Heroku toolbelt that, in a sense, gives you raw access to the git repo that your Heroku application uses. Using this, I managed to download the git repo locally, which took quite a while because the repo had grown to some insane size (was a few hundred MBs, nearing a GB). Digging into the repo, I found a huge pack file that was causing all the issues. I suspected it to be because of a huge binary accidentally checked in by a team member, but my git-fu isn’t really good enough to say for sure. Running a gc on the repo using the plugin didn’t really help. So, I was still left with having to blow up and recreate the staging app.

I thought it'd be nice if I could just reset the Heroku git repo and start over, instead of having to recreate the app (and then add all the addons to it, which was a bit of a hassle since there were paid addons and I'd have to contact the instance owner to re-enable them). So, to figure out where the repo is hosted and how the plugin manages access to it, I went through the plugin source code. Turns out, the repo is hosted on S3, and the Heroku toolbelt exposes the S3 URL to its plugins. Better yet, the plugin itself had an undocumented command to reset and upload an empty git repo back to S3 🙂

This is pretty amazing. I can now start over with a clean repo in case my Heroku repo is messed up for any reason. On top of that, I can now deploy an entirely new app into my Heroku instance without leaving any dangling commits (not that it's a common use case, or even a useful one). See this protip for details: https://coderwall.com/p/okrlzg.

To install the plugin, do:

$ heroku plugins:install https://github.com/lstoll/heroku-repo.git

Here’s a few commands that I found useful:

Download the Git repo as an archive (useful when you can’t clone from Heroku)

$ heroku repo:download -a appname

GC the repo (on Heroku)

$ heroku repo:gc -a appname

Reset the repo and upload an empty repo

$ heroku repo:reset -a appname

The plugin has a few more useful commands. Do check it out on GitHub: https://github.com/lstoll/heroku-repo. Also, I'd recommend going through its source code to see how it works. I thought it was pretty neat.

I was trying to upgrade my Ubuntu installation to 12.10 (Quantal Quetzal), and the update manager (Muon) kept failing with error messages like:

dpkg: warning: 'ldconfig' not found in PATH or not executable.
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable.
dpkg: error: 2 expected programs not found in PATH or not executable.
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

which ldconfig as root returned /sbin/ldconfig, and of course, root's $PATH had /sbin in it, so I couldn't think of a reason why the updates were failing. But a bit of googling led me to the sudoers file (/etc/sudoers).

Muon, and in turn apt-get, use sudo for installing stuff. And sudo starts with an empty/default ENV if it's either been compiled with --with-secure-path, or if env_reset has been set in the sudoers file. In my case, env_reset was set in the sudoers file, so sudo's ENV didn't have /sbin in it. With env_reset, you should provide a secure_path, which is the $PATH that any sudoed process will use. So, after the fix, my /etc/sudoers looks like:

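The snippet itself is missing from this archive; the relevant sudoers lines would be along these lines (this secure_path value is the Ubuntu default):

```
Defaults env_reset
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```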
In case you're seeing the error but don't have env_reset set in your sudoers file, it's likely that your version of sudo was compiled with --with-secure-path. To see the options that your version of sudo was compiled with:
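The command is missing here; running sudo -V as root (e.g. via sudo itself) prints sudo's full configuration, including any compiled-in secure path:

```
$ sudo sudo -V
```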

I had to re-install rvm on my MacBook because my gemsets were a bit messed up, and I thought I should start over with a clean install. I just rm -rf‘ed ~/.rvm, and then went ahead and re-installed it according to the instructions at https://rvm.io/. Installed ruby 1.9.3, and the openssl package as described here. But after that, both gem install and bundle install started failing because the SSL certificate from https://rubygems.org couldn’t be verified. Couldn’t quite figure out what was wrong. But, after a bit of googling, found out a way to skip the SSL certificate checks.

To skip the SSL certificate checks, just add this line to your .gemrc

:ssl_verify_mode: 0

This causes the gem and bundle commands to skip SSL certificate verification when fetching from an HTTPS source.

Of course, you can also bypass the error by using non-HTTPS URLs for your gem sources in your Gemfile (when using bundler). So, something like:

source 'https://rubygems.org'

in your Gemfile, will become:

source 'http://rubygems.org'

Neither of these actually fixes the problem. They just skip the SSL certificate checks, or use a non-SSL source. I still don't know what went wrong during the re-install.

I just set up my new MacBook Pro, running OSX Lion (10.7.3), for Rails development. Installed rvm, installed Ruby 1.9.3 using rvm, and did a gem install rails -v 3.2.3. Everything went fine, until I tried to create a new rails app.

rails new <app_name> would create the app structure, but would then fail with a segfault in http.rb. Here’s what the stacktrace looked like:
...
run bundle install
/Users/CodeMangler/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:799: [BUG] Segmentation fault
ruby 1.9.3p125 (2012-02-16 revision 34643) [x86_64-darwin11.3.0]
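The actual fix steps are missing from this archive; given the OpenSSL hint in the next paragraph, the usual remedy at the time was to install rvm's own OpenSSL and rebuild Ruby against it (commands per the rvm docs of that era):

```
$ rvm pkg install openssl
$ rvm reinstall 1.9.3 --with-openssl-dir=$rvm_path/usr
```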

Do that, and you should be all set. I tried creating a new rails app after that, and it went through just fine. Also, from googling, it appears that if it’s a segfault in http.rb, it’s most likely due to OpenSSL.

My job for the past few years primarily involved developing eclipse based applications, and so, I've had to use eclipse full-time for quite some time. My pet peeve with eclipse has always been its usability. It's not particularly keyboard friendly (not the same as having a gazillion keyboard shortcuts, which eclipse has plenty of), and the UI is just plain clunky. One thing that particularly annoyed me at some point was not being able to conveniently switch between open tabs (editors, views), like you can with most other IDEs and text editors. Before you say it, I know there's a Ctrl+F6 that brings up a quick switcher for editor tabs, but come on, it's not even close to what I'd like it to be. And no, reassigning it to Ctrl+Tab just doesn't cut it.

So, I went ahead and wrote a tab switcher for eclipse sometime last year. Never really released it though, since there was always that one little feature I could add before it was "done", and I never had the time for it. Regardless, I've been using it for some time, and it works just the way I like it. If you ever missed having a decent tab switcher for eclipse, try this one out.

Screenshot

Tabby Screenshot

Default Key Bindings

Ctrl+Tab – Cycles through the list of open editors and views

Ctrl+Shift+Tab – Cycles through the list of open editors and views, in reverse

Esc – Switches focus to the last active editor. Useful when you’re navigating through views and would like to quickly get back to editing.

This was another interesting problem at Project Euler (Problem 12). Interesting because the naïve solution to this was all too trivial but slow, which forced me to seek out a better approach and I finally ended up learning something new 🙂

The nth triangle number is defined as the sum of all natural numbers up to n. Well, that's definitely trivial to calculate. It's the sum of the first n natural numbers, and can be calculated using the well-known formula:
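The formula itself didn't survive in this archive; it's the standard identity Tn = n * (n + 1) / 2, which in Ruby looks like:

```ruby
# nth triangle number: the sum 1 + 2 + ... + n, i.e. n * (n + 1) / 2
def triangle(n)
  n * (n + 1) / 2
end
```

For example, triangle(100) gives 5050, the 100th triangle number used in the worked example below.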

So, all that remains is to calculate the divisors, and we all know how to do that, right? Just count the numbers from 2 up to half (or the square root, if you prefer) of the triangle number that divide it. So, here's the code I started with:
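The original snippet is missing from this archive; a naive version matching the description (trial division up to the square root; the function name is mine) would be:

```ruby
# Naive divisor list: trial division up to the square root.
# Each divisor i below the root pairs with a divisor n / i above it.
def divisors(n)
  result = []
  (1..Integer.sqrt(n)).each do |i|
    next unless n % i == 0
    result << i
    result << n / i unless i == n / i
  end
  result.sort
end
```

This works, but recomputing it for ever-larger triangle numbers is what makes the naive approach too slow for the problem's 500-divisor target.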

The idea is simple actually. Take 5050, for example (the 100th triangle number). Its prime factors are: 2, 5, 5 and 101. Now reduce the list to a list of unique numbers with their repetition represented in their exponents. So, the list 2, 5, 5, 101 becomes: 2^1, 5^2, 101^1. Now, add 1 to every exponent and multiply the resulting numbers. So, that would give us: (1+1) * (2+1) * (1+1) = 12. That's the number of divisors that 5050 has. And, of course, that can be verified using the divisors method:

Since I’d already written a function to find the prime factors to solve Problem 3, solving this one was just a matter of writing code to search through a list of triangle numbers to find the first one with more than 500 divisors. Here’s the final code I ended up with:
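The final code is also missing from this archive; a sketch combining the prime-factorization divisor count with the triangle-number search (function names are mine, and Ruby's stdlib prime_division stands in for the hand-rolled prime factors function mentioned above) might look like:

```ruby
require 'prime'

# Divisor count from the prime factorization: for n = p1^e1 * p2^e2 * ...,
# the number of divisors is (e1 + 1) * (e2 + 1) * ...
def divisor_count(n)
  n.prime_division.reduce(1) { |acc, (_prime, exp)| acc * (exp + 1) }
end

# Walk the triangle numbers until one has more than `limit` divisors.
def first_triangle_with_divisors(limit)
  n = 0
  loop do
    n += 1
    triangle = n * (n + 1) / 2
    return triangle if divisor_count(triangle) > limit
  end
end
```

For the actual problem, the call would be first_triangle_with_divisors(500); as a quick sanity check, first_triangle_with_divisors(5) returns 28, the first triangle number with over five divisors.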

My current project is a UML modeling tool, and I’m working on the C# 3.0 bits. One of the things it lets you do is to Reverse Engineer your code and obtain the model. And that includes reversing the assemblies too.

C# 3.0 introduced Auto Properties (or Automatically Implemented Properties if you like the wordy versions 😀 ), and the tool lets you model them. To maintain the symmetry, reverse engineering should detect and mark the Auto Properties appropriately. This is easy in the case of a source file, since the rule for identifying an Auto Property is pretty simple: it's a property contained in a concrete type (non-abstract class/struct) which has both its accessors without their implementing bodies. Simple, eh?

But it gets a wee bit more complicated to do that once you compile it. Because the compiler generates a body for the Auto Property accessors, the last bit about the accessors not having an implementing body fails. Essentially, it seems like there's no way to distinguish an Auto Property from a regular one once it's been compiled. And in fact, you shouldn't have to, because that's the whole idea! Auto Properties help reduce some clutter in code, possibly save you a few keystrokes, and let the compiler do the gruntwork. So, if you're trying to distinguish an Auto Property, think again. You probably don't have to.

Anyways, I still wanted to see if there’s a way to do it and the first thing I did was to Google for it. Since that didn’t turn up anything useful, I decided to figure it out myself and fired up Reflector. Long story (relative to a blog post, that is) short, I figured it out and decided to write this post to fill up a gaping hole in the public knowledge base 😀

The C# compiler decorates all the stuff it generates with the CompilerGeneratedAttribute. Even vbc.exe & vjc.exe should do the same, but I haven’t checked. Since the accessors of an Auto Property are generated, they’re decorated too 🙂 Therefore, the rule for identifying an Auto Property after it’s been compiled is: It’s a property which has both the accessors, both of which have been decorated with the CompilerGeneratedAttribute. Simple isn’t it?
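A sketch of that rule as a reflection check (the class and method names are mine):

```csharp
using System.Reflection;
using System.Runtime.CompilerServices;

static class AutoPropertyDetector
{
    // A compiled auto property has both accessors, and both carry
    // [CompilerGenerated]. (The generated backing field does too.)
    public static bool IsAutoProperty(PropertyInfo property)
    {
        MethodInfo getter = property.GetGetMethod(true);
        MethodInfo setter = property.GetSetMethod(true);
        return getter != null && setter != null
            && getter.IsDefined(typeof(CompilerGeneratedAttribute), false)
            && setter.IsDefined(typeof(CompilerGeneratedAttribute), false);
    }
}
```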

Ironically, after doing all this, I realized there's not much sense in distinguishing them anyway. I mean, they're your regular properties for most purposes. Moreover, in my case, the information derived from the assembly is used only to let the user extend the existing types, i.e. create generalizations and such. So, even the existing implementation doesn't bother showing the assembly types in much detail. They're mostly limited to high-level details like classes, interfaces and such.

Ultimately, it wasn’t really useful, but was interesting nonetheless 😀

Subtext is a visual programming language where you express the application logic in a tabular fashion. I’m not gonna try to explain Subtext in this post. You can find out more from its homepage: http://subtextual.org/.

I learned about Subtext nearly 2 months back. It sounded interesting, but all I could find on the site were 2 screencasts and a bunch of papers. Apparently, it’s still a research project and the prototype used in the screencast isn’t publicly available yet. I was a little disappointed, and almost forgot about it until today.