Think about the way you think. Think about that thought, and this one. Did you think using words? Did you see the words? Sound them out mentally? If someone asked you to describe yourself, you would probably think of a series of adjectives (at least if you’re an English speaker).

We think via language, spoken or written. It’s the source of our intelligence and in some ways the root of our consciousness. Helen Keller is quoted as saying:

When I learned the meaning of ‘I’ and ‘me’ and found that I was something, I began to think. Then consciousness first existed for me.

The languages you learn are the languages you express yourself with. They mold the way that you think about things and create who you are within your own mind. I’ve written about this before and it’s not an entirely new concept.

Recently I’ve gone to great lengths to change the way I think. Finding new ways to solve problems, especially software problems, often involves learning new languages, syntaxes or paradigms. You can force Java or C to do just about anything, just as you can force the English language to describe just about anything, but it might be that by using Java instead of Haskell, you’re using the wrong tool for the job.

I wanted to expose myself to a breadth of different software paradigms in as little time as possible. Rather than reading dozens of tutorials or poring through hundreds of pages of reference manuals to get maximum exposure, I bought a book called Seven Languages in Seven Weeks. Packed into this dense little tome is an overview of seven syntaxes from different families and programming paradigms.

The book begins with Ruby. It’s a fairly common syntax and I considered skipping this chapter. Indeed, with the relative ubiquity of the language I wondered why it had been included at all. In the spirit of playing along with the author I read through the sections and did the exercises as described. It turned out to be a good idea; some of the concepts around using method_missing as a DSL generator I had never put into practice.

From a comfort standpoint, starting with a language you’re familiar with is also a bit like reading the introduction to a Latin grammar textbook in English. I know the language, and therefore the author can present his approach to me in words I can understand before I try to make my way through the rest of his presentation.

Speaking of presentation, Tate clearly has a grasp of basic pedagogy. From the beginning he uses a mnemonic device to help the reader put a face to each chapter and its methodology. For Tate, every language is like a character in a movie. They have their own personalities; something that makes them unique within the dozens of lexical environments out there. For Ruby it’s Mary Poppins. You know, syntactical sugar. Get it?

After Ruby, Tate introduces a language I had never heard of: Io. Just try searching for information about this little language on the web. You won’t exactly find the throbbing community that surrounds Java or Ruby to back you up. No, if you choose to use Io to solve something new, you’ll likely find yourself in uncharted territory. Not necessarily a bad thing if your approach to the text is to learn new ways to think.

A prototype language, Io is described by Tate as Ferris Bueller. In use I got the distinct impression that Io was heavily influenced by Smalltalk; everything you send is a message, and there is nothing but senders and receivers of messages. Method or function? Not really: there are ‘slots’ with message handlers. Can they be construed as the same thing? Abstractly, yes, but that avoids thinking in the way that makes the language unique. Sending messages between objects is a powerful concept, and it will help you better understand Objective-C and Smalltalk.

Using Io feels a bit like working in JavaScript, the only other prototype language I have any experience with. The concurrency framework is dead simple and provides the reader with a taste of things to come from languages like Scala and Clojure. In fact, the actor framework in Io is so simple and impressive it feels like a great environment in which to teach concepts of asynchronous behavior and concurrent development.

After Io we get to Prolog, the most frustrating language paradigm in the book for me to grasp. Tate says Prolog is like Rain Man. That must make me Tom Cruise.

The logic programming paradigm was at once the most fascinating and frustrating for me to study. At first I was enthralled. A language that I can plug values into and simply query against to get the answer, like a super-powered database? Sign me up. But I immediately found myself fighting the syntax. It took me some time to grasp the recursive nature of the language as well; there are no looping structures.

Solving the sudoku problem at the end is the best example of the power of Prolog and languages like it. Reducing a game to a couple of lines of syntax, injecting the rules and simply asking questions is a beautiful way to solve many of the problems modern engineers are presented with…if Rain Man doesn’t drive you nuts along the way.

With Scala we take a detour back to familiar territory. Scala is the first variant on the Java language I’d had the opportunity to use, so when I began the chapter I had some exposure to it. Most of the concepts in this language sank right in.

Tate says we can think of Scala as Edward Scissorhands: constructed from spare parts and a lot of paradigms that already exist. I prefer to think of Scala as MacGyver; it can do pretty much anything in a pinch. Scala was a comfortable environment to take a break in for a while. It sports functional programming features like higher-order functions, while retaining many imperative concepts held over from C-based languages. It’s also completely interoperable with Java, so all of those libraries we’ve grown attached to, like joda and jsyn, can be reused in the same lexical environment.

For concurrency Scala provides an actor system, much like Io. Tate clearly planned the book to address concurrency in a methodical way, first by introducing simple examples with Io, then advancing to Scala before diving headlong into the deep waters of Erlang and Clojure.

Things get uncomfortable again as Tate introduces Erlang. From the get-go Erlang baffled me, and when it was revealed that it was modeled after Prolog, I understood why. The only language compared to a movie antagonist, Erlang is described by Tate as Agent Smith from The Matrix. Tate says this is due to Agent Smith’s self-replicating capabilities, mirrored in the fail-safes built into Erlang that let the user build highly fault-tolerant concurrent systems that “just won’t die”. I think it’s because Erlang is evil.

Erlang is clearly very powerful, so as with Prolog I struggled through the examples and problem sets. I still don’t feel like I fully grasp how to do anything useful with it. Of the languages in the book, I feel like this is the one I need to spend the most time with to really understand.

Next we get a Lisp. Clojure, a language fully compatible with the JVM, is a Lisp not at all unlike Scheme, minus a few parentheses. For Tate this language is like Yoda, no doubt due to the “reverse” notation of the arguments and the “inside-outness” of the code construction, at least compared to C.

Surprisingly, I took right to it. Of the new lexical environments this felt the most comfortable, but then, I’ve played with Emacs a bit. The concurrency framework is not at all unlike Scala’s, with some notable additions. The concept of STM was awkward at first, but after fiddling with it for a while I was comfortable producing usable code.

The interoperability with Java is another major benefit to using Clojure. For Dijkstra’s Sleeping Barber problem, rather than struggle through writing a queue from scratch, I just borrowed the existing Java LinkedBlockingQueue, cranking up one actor to poll it, and another to deliver to it:

In just under 1000 parentheses the barbershop problem was solved. The wrapper around it is unnecessary, but then the whole solution is a little bit wordy for Clojure.
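My Clojure solution is in the exercise repo linked at the end of this post; stripped to its skeleton, the borrowed-queue idea looks something like this in plain Java (a sketch with made-up names, not the actual solution):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the borrowed-queue idea: the waiting room is a bounded
// LinkedBlockingQueue; one thread delivers customers into it, and the
// barber thread polls them back out.
public class BarberShop {
    static int runShop(int customers, int chairs) throws InterruptedException {
        LinkedBlockingQueue<Integer> waitingRoom = new LinkedBlockingQueue<>(chairs);
        Thread door = new Thread(() -> {
            for (int c = 1; c <= customers; c++) {
                waitingRoom.offer(c); // if the room is full, the customer leaves
            }
        });
        final int[] haircuts = {0};
        Thread barber = new Thread(() -> {
            try {
                // Keep cutting until the room stays empty for a beat.
                while (waitingRoom.poll(200, TimeUnit.MILLISECONDS) != null) {
                    haircuts[0]++;
                }
            } catch (InterruptedException ignored) {
            }
        });
        door.start();
        barber.start();
        door.join();
        barber.join();
        return haircuts[0];
    }
}
```

The bounded queue gives you the finite waiting room for free: a full queue rejects the offer, which is exactly a customer walking out.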

Impressed as I was with Clojure, it was time to study the final language in the book, Haskell. I originally selected the book based on the inclusion of Haskell. For some time now I’ve wanted to take a crack at this pure functional, almost entirely mathematical language.

Tate compares Haskell to the ever-logical Spock, and I’m still dazzled by it. Having read the chapter and gone through the exercises, I feel like I’ve only scratched the surface of what it can do. Its unofficial tagline is that it makes hard things easy and easy things hard. That ain’t no lie. Try reading a file with it. Do something simple, like open a socket. It feels like pulling teeth. Now go write a Fibonacci solver in your language of choice. Chances are your version won’t be as succinct or as quick as the Haskell one. Of all the languages in the book, this is the one I intend to dig the deepest with.

Wrap-up

When learning anything, I generally feel like a breadth-first overview is the best method of getting started. When learning new ways to think, this breadth-first search seems even more important. Get all your options on the table, see what’s been discovered before deciding how to tackle the problem. Selecting a strategy to go deeper with is a decision that can always be deferred until you know what your strategies are (Of course, you can only defer for so long before you just need to make a damned decision based on what you already know).

The real value of Seven Languages is that it provides this kind of breadth-first overview. You may know Java or C already. That’s great. What else is out there? What can a language like Io make easy? Clojure will help you understand Lisp. Haskell will help you understand any functional language and improve your understanding of modern math. Scala will let you build damn near anything.

Tate’s progression makes a lot of sense as well. If I were creating a curriculum to prepare a developer for the real world, I would start a youngster out with something like Ruby. It’s an obvious ramp into Java and C. Then I might introduce something like Io to explain prototype languages and concurrency in a simple way; that’s a step towards a better understanding of both JavaScript and Objective-C. Then I might start them on Scala, for maximum exposure to as many concepts as possible. From Scala, learning a functional language is easier, since the programmer has been using higher-order functions like fold and map and is used to immutable variables. Tate’s text provides a decent way to do all of this, introducing a young developer not only to the syntaxes but to paradigms broad enough to provide insight into damn near any language out there.

If you’re interested in seeing my solutions to the exercises and problem sets, you can find them here. I learned a lot along the way, and I think I achieved the goal I had set out to achieve: Learning new ways to think.

How about a basic example? When a user logs in, set a bit in a bitset at the location of that user’s ID number. If you have a bitset allocated for each day, you can tell for any given day how many users logged in by looking at the cardinality of the bitset. Want to see if a particular user logged in on a particular day? Just check the bit at that user’s ID in the bitset for the day in question. You can also perform more advanced analysis, taking the union or intersection of multiple sets to determine various statistics.

The theory behind it is simple and sound. It’s faster than hitting an RDBMS for values that are binary in nature, and the ability to apply basic set theory to your bitsets to analyze your metrics is quite powerful.
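In Java the core of the idea fits in a few lines with java.util.BitSet. A minimal sketch of my own (class and method names are mine, not from the Spool post):

```java
import java.util.BitSet;

// One BitSet per day, indexed by user ID. Setting a bit records a
// login; cardinality() counts unique users for that day.
public class LoginMetrics {
    final BitSet logins = new BitSet();

    void recordLogin(int userId) {
        logins.set(userId);
    }

    boolean loggedIn(int userId) {
        return logins.get(userId);
    }

    int uniqueLogins() {
        return logins.cardinality();
    }

    // Users who logged in on both of two days: the intersection.
    static BitSet bothDays(BitSet a, BitSet b) {
        BitSet result = (BitSet) a.clone();
        result.and(b);
        return result;
    }
}
```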

I began to use this method and the code examples on the Spool blog to create metrics in a variety of systems, not to mention create silly stuff like prime number tables. It only took a few implementations to realize that the code examples, taken at face value, don’t really work.

The Problem with BitSet.valueOf() and BitSet.toByteArray()

The heart of the problem lies in the output of Java’s default BitSet.valueOf() method. Here is one of the examples on the page:

If you use the Jedis setbit method to set all of your individual bits, then read the entire set out with BitSet.valueOf(), the bits come out scrambled: BitSet.valueOf() treats bit 0 as the least significant bit of the first byte, whereas Redis SETBIT treats offset 0 as the most significant bit of the first byte. The bit sex, as it is called, is reversed in this case, and you can’t possibly get an accurate bitset out of Redis if you retrieve it and convert it using plain ol’ BitSet.valueOf(). You have to have a ‘tweener method to flip the bit sex for you.

You might also think, though it isn’t in the examples, that simply performing a BitSet.toByteArray() would create a byte array appropriate for storage in Redis to be read back via redis.getbit(). Not so. Java uses the same bit order for both of its own calls, which confuses things greatly: if you store the output of BitSet.toByteArray() and read it back using BitSet.valueOf(), everything looks correct. Try to read an individual bit out of that value with getbit, though, and be prepared for a surprise.
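A minimal sketch of such a ‘tweener (my own illustration, not necessarily the exact code in my gist): reverse the bit order within each byte on the way in and out.

```java
import java.util.BitSet;

// Flip the "bit sex": reverse the bit order within each byte so that
// Redis SETBIT offsets (bit 0 = high bit of byte 0) line up with
// Java's BitSet layout (bit 0 = low bit of byte 0).
public class BitSex {
    static byte[] flip(byte[] in) {
        byte[] out = new byte[in.length];
        for (int i = 0; i < in.length; i++) {
            int b = in[i] & 0xFF;
            int reversed = 0;
            for (int bit = 0; bit < 8; bit++) {
                if ((b & (1 << bit)) != 0) {
                    reversed |= 1 << (7 - bit);
                }
            }
            out[i] = (byte) reversed;
        }
        return out;
    }

    // Bytes fetched from Redis -> a BitSet whose indices match the
    // offsets that were used with SETBIT.
    static BitSet fromRedis(byte[] redisBytes) {
        return BitSet.valueOf(flip(redisBytes));
    }
}
```

Flipping is its own inverse, so the same helper works for writing a BitSet back to Redis: flip the output of toByteArray() before you SET it.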

You can see the gist I created to read bytes out and put bytes in, retaining integrity both ways, and to show how things do and don’t work:

You can run that locally to get something like a code story…

Whether the guys over at Spool have an unfortunately named helper method that looks exactly like the native Java one, or use some other methodology to maintain bit order in their bitsets, I can’t say. It goes without saying that you should check and double-check any ol’ method you pull off the streets.

Ruby, you’re great. No really, I love your terse syntax, iterating is easy, and the community that supports you is quite large. But I think we need to take a break.

Wait, don’t cry. Let me explain.

It’s just that I’m tired of having to remember so many syntaxes, especially one so different from the others I work with. I have to use C# or Java for my enterprisey stuff, then switch to JavaScript for client-side, then switch to whatever templating engine I’m using. It gets…confusing. I caught myself writing a for loop in a file ending in .rb. Seriously.

What’s that? How will I write server-side scripts?

Well, I’ve thought about it and I think Node.js and I are going to start a relationship. Don’t be like that, Ruby. Try to understand. Node is supported on all the platforms I use. I can write scripts in JavaScript. It’s familiar.

Node and I had our first date last night. I was looking at a Project Euler problem and after working out something that made sense on paper, I glanced over at node and said “Let’s do it”.

We started going at it. Things were looking great at the start but then the night got rocky. My solution on paper just wasn’t working out in code. I wrote and rewrote but just couldn’t make anything work with Node. To be fair Star Trek was playing in the background and my wife was working on her latest project in the same room. The way Spock says “sensors” and the grinding sound of eggshells on sandpaper didn’t really set the mood for solving any problems.

I smiled at node. “I’ll, uh, call you in the morning,” I said, and went to bed.

The next morning I took a long walk with my dogs and thought about what had transpired the night before. Within minutes I had the solution worked out in my head, and I realized it wasn’t node’s fault the night went sour, it was mine. I just needed to sleep on it.

I rushed back to the house, cracked open Emacs and tried again with node. It was instant harmony. Here is the brute force solution to problem #3 on Project Euler:
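The Node script itself is in the embedded gist; the idea it implements, trial-dividing factors out of 600851475143 until only the largest prime remains, can be sketched in any language (here in Java, purely for illustration):

```java
// Project Euler #3: the largest prime factor of 600851475143,
// by brute-force trial division.
public class Euler3 {
    static long largestPrimeFactor(long n) {
        long factor = 2;
        while (factor * factor <= n) {
            if (n % factor == 0) {
                n /= factor; // divide the factor out completely
            } else {
                factor++;
            }
        }
        return n; // whatever remains is the largest prime factor
    }

    public static void main(String[] args) {
        System.out.println(largestPrimeFactor(600851475143L));
    }
}
```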

So you see, you’ve been a fun fling Ruby, and we may get together again someday. You know how fickle I am with programming languages. Let’s just take some time off and see where it goes. Node and I may have something here.

“Here’s a simple one” remarked the young man as he drew a diagram on the board. “You should have this figured out in about five minutes.”

I stared at the pyramid of numbers scribbled hastily in blue dry erase. The only thing clear to me in that moment was that I wouldn’t have this figured out in five minutes. Or even ten.

“Any number in the pyramid, or in this case tree array, is the sum of the two numbers above it,” the young man continued, “write a function that gets the value of any x, y coordinate in this structure.”

Hearing it out loud didn’t improve my confidence much. I stood in front of the whiteboard with the dry erase marker hanging slack in my hand. It occurred to me that five minutes had probably already passed. I began to draw lines on the board connecting various nodes. I drew the coordinates of array values at various locations, hoping an answer would pop out at me. I stood back and chewed on the marker a little before realizing what I was doing. I was dumbfounded. I was stumped.

I gave up.

I walked away from the board that day, defeated. It was the first blow my ego had experienced in a long time, and over such a simple problem, one that I apparently should have solved “in about five minutes”. I began to question my ability as a software engineer. I began to question whether all the software I’d written over the past half-decade was worthless junk. Maybe I’m just not l33t? Maybe if I can’t even solve a problem so seemingly simple in “about five minutes”, I should just hang up my hat?

The problem hung over my head for the next week. One night, as I lay in an uncomfortable Chicago bed, I thought long and hard about what happened that day. I couldn’t believe that I gave up. I never give up. I walked away without getting to a solution. Over the past week I hadn’t even tried to solve the problem. I slumped around feeling sorry for myself and my apparent lack of skill as an engineer. That night, somewhere between waking and dreaming, I stopped the pity party and started figuring the damn thing out.

The mental image of that numerical pyramid that had haunted my thoughts over the past week was top of mind as I woke in the morning. I started to trace through it as I showered, creating further iterations. Adding values at the end in an effort to find a pattern somewhere. I grabbed my iPad and stylus and drew it out again, adding array notation values as I had drawn on the board.

I stopped thinking about how long it was taking me to solve the problem, and focused on this pyramid of numbers. I still couldn’t find a pattern in the values, nothing that said to me that for any x, y coordinate I could simply subtract or add a number here or there and have a solution.

Not too long afterwards Cary awoke and we walked to a local coffee shop (Intelligentsia on Randolph St, highly recommended).

“Whatcha working on?” Cary asked.

I described the problem to her.

“It sounds like something that has no practical application in the real world.”

True or not, it dominated my mental world, and I had to solve it before I went crazy. I started to use Cary as a sounding board.

“I should just start writing down the rules that I know,” I suggested out loud.

“Yeah, treat it like a logic puzzle.”

So I did. Here were the rules I came up with:

1. 0, 0 = 1
2. 1, 0 = 1
3. 1, 1 = 1
4. if x is equal to y, then the value is 1
5. if y is equal to 0, then the value is 1

That looked about right. I focused on these rules instead of a pyramid. I realized that rule 4 is really the same as rules 1 and 3: 0 is equal to 0, and 1 is equal to 1. I also realized that rule 2 is the same as rule 5. In the end, I really only had two rules:

1. if x is equal to y, then the value is 1
2. if y is equal to 0, then the value is 1

I distilled further.

1. if x is equal to y, then the value is 1
2. if x or y is less than 0, or y is greater than x, then the value is 0.

“I’m an idiot” I groaned, “It’s a simple recursive function.”

All of the values could be derived from two simple rules: each value is the sum of its parent nodes, those parents are derived from their own parents, and so on. If I wanted to know the value of any number in the array, it was simply a matter of performing the check again, using the same function, on the left- and right-hand sides of the expression, all the way back to 0, 0.

Let’s look at it more closely:

The value of any x, y coordinate in the system is the sum of two other values in the coordinate system, x-1, y-1 and x-1, y. For example, to get the value of the integer at location 4, 2:

Given that these two values are themselves the sum of their “parent” nodes, this same function can be called with their coordinates plugged in for x and y, and those values summed, all the way back to the value of 0, 0. If I pick an x or y coordinate that is less than 0, or if I pick a value pair like 5 and 6, then I’m going to get a 0 back. Everything else should return a 1 or the sum of the parent nodes.

Here was the simple three-line Ruby function I wrote to accommodate this:
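The Ruby itself isn’t reproduced here, but the two rules translate directly into a recursive function. A sketch of the same recursion, in Java for illustration:

```java
// The two distilled rules as a recursive function: out-of-range
// coordinates are 0, the edges are 1, and everything else is the
// sum of the two parent nodes.
public class Pyramid {
    static long value(int x, int y) {
        if (x < 0 || y < 0 || y > x) return 0; // outside the pyramid
        if (x == y || y == 0) return 1;        // the edges (covers 0, 0)
        return value(x - 1, y - 1) + value(x - 1, y);
    }
}
```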

It was that simple.

This was an important reminder that sometimes you just have to walk away from a problem, shake your hands out, talk to your wife over coffee, and relax before approaching it again. Going back and solving this little problem restored my confidence. My ego had taken a real blow, and the thought of giving up on it was driving me batshit crazy. It’s a lot like riding a bike or doing backflips: if you screw up and land on your face, you have to immediately get up and do ten more to avoid the psychological aftereffects of that perceived failure.

To get back on the bike, I plan on doing several more of these over the next few months. Depending on how interesting they are, I’ll report them here.

Update: Victor Nicollet has educated me in the way of Pascal’s Triangle, the diagram featured above. His solution to the problem and accompanying blog post are well worth the time to read and understand. Thanks Victor.

A language that doesn’t affect the way you think about programming is not worth knowing.

Alan Perlis

After a few minutes of thinking about it, I changed the quote to read:

A language that doesn’t change the way you think is not worth knowing.

I don’t think in Ruby. Or Perl. Or Python. I certainly don’t think in any of the C derivatives. I think in English. Even when I program I put a series of nouns, verbs and adjectives together to form sentences that the computer will be able to interpret and compile into bytecode.

What does it mean to be an English thinker? What effect does the language and its syntax have on me? Does English have more strong verbs than Eastern languages? Does that have some effect on my thought processes compared to, say, a native Japanese speaker’s? What about other languages, programming or otherwise? If I learned Italian well enough to think in it, would it fundamentally change the way I think?

What about structure? Does the structure of a human language change the way we think like the structure of a programming language does? Perhaps I find that I can more eloquently express myself in Mandarin than in English on the topic of art, or that I prefer the conciseness of German when trying to explain mathematical concepts. What about taking human languages and learning them to change the way I think about a subject, just as I mix and match programming languages depending on their utility?

Could I boil a human language down to the extent that I could say with confidence “The best language for describing training exercises to an athlete is Russian”?

I have a proposition: Learn a new spoken language. Learn it well enough to think in that language. Understand and document the way that thinking in the new language changes the way you think. Perhaps it’s looking at verb order, and what verb order means to people who reverse verbs and their objects. Perhaps it’s looking at the use of adjectives, and what that does to the way you mentally describe things.

I could see this turning into a study of some sort.

When programming, you’re looking for a balance of conciseness, eloquence and maintainability as you choose your “words”. If we could do the same with human language, putting a different language into use depending on the situation, could we maximize our potential as thinkers and, by proxy, communicators?

A while back I created a simple command line tool that allowed me to create tasks from Launchy and send them directly to Toodledo. My process was simple. I’d create a bunch of tasks throughout the day while doing other stuff, then sometime that night (or the next morning) I’d go through all those tasks and put them in the right container, assign due dates, and make projects out of them if necessary. This works great for tasks that can wait a day.

But what about tasks that can’t wait a day?

I realized that what I needed was a way to add a due date of ‘today’ inline with the task. I played with the code and in about 10 minutes or so I had the feature added.

That was easy, why not take it further?

So I did. Todo-CL has a slew of new options for creating tasks from the command line, or in my case, from Launchy. Here is a snapshot of the README file, which includes the new context switches for adding tasks on the fly:

I use Launchy hundreds of times in a day. The Alt and Spacebar keys are usually the first to show significant wear and tear on any new computer I work with. It’s the first thing I install, and it’s how I keep my hands on the keyboard and off the mouse. I also use a web service called Toodledo for task, todo, and personal project management. Toodledo sounds like a lame children’s toy, but it is extremely good at what it does: being an ugly but efficient task management system.

Toodledo works via the web and iPhone, but what I really wanted was something that would allow me to add tasks via the Launchy window as I thought of them; in meetings, on phone calls, while writing, etc. This just needed to be a rapid fire todo creator: hit Alt+Spacebar for the Launchy window, then todo, tab, and my task. You would think that someone would have written a very simple Windows-native client to quickly add todos to the Toodledo service, but when I went looking, the closest thing I found was a Ruby client. Not exactly native. I pulled together a few projects that already existed out there and created my own little command line todo client that I could run from Launchy.

This was of course many months ago.

I’ve become a regular user of git and GitHub lately. For open source projects, code snippets, and sharing text-based files in general, GitHub is by far the most mature platform I’ve had the pleasure of using. I figured if I was going to share this little snippet of code with the world at large, I may as well explore the git and GitHub paradigm.

First, there is git as a version control system. That’s simple enough: set up a repository and check in some code. Modify it locally, commit it, push it. Nothing special here. I created a repo for my little todo client, which I’m calling todo-cl, and checked in my code here.

To check out the merge features I created another local source tree and mucked with the code, then pulled it back into my original branch. No surprises here either. Let’s create a branch.

Whoa.

I think I created and switched between 3 or 4 branches, testing modifications and merges, in less than 5 minutes. In a command line window. Not in an IDE. This is really freaking cool. In most of the version control systems I’ve used, branching is by far the most problematic feature to work with. It usually means creating different directories with different versions of the source tree, taking up disk space, forcing me to navigate around and make sure that I’m making my changes in the right directory. It’s a pain in the ass. With git, I get to work in the same directory if I want, the amount of time it takes to switch branches is the time it takes to run a simple command, and I can create several branches without doubling and tripling the amount of disk space in use. Sure, hard drives are cheap, but you can’t put a price on time, and branching in git is a huge time saver.

From a version control standpoint, git is simple as hell, with a few powerful features that set it apart. This is where GitHub takes over. Obviously GitHub is a git server. It stores your repository, displays it via the web, and allows others to search it and see what you’ve done. Very cool.

But what else can github do?

For starters I can fork any public repository. I’ve already done this with the RWMidi project. If I decide that I want to make some changes to the Rails framework, the Linux kernel, or Apache’s HTTP server, I can do that. I can choose to go off on my own and continue working on my little fork, or I can issue a pull request and let the folks maintaining the main releases know that I’ve made a swanky fix that others might be interested in. This isn’t all that new; GitHub has just made it extremely easy. GitHub has also made it personal. If someone wants to use my version of the Linux kernel instead of, say, Linus Torvalds’s version, they can. Really though…keep using the official Linux kernel. My todo-cl repo can be forked too.

What are pages?

You can create a branch on any repository called gh-pages, load it up with a complete html website, and github will serve the contents. This can become the main page for the software hosted on that repository. Indeed, github has taken this a step further, and allowed custom domains for these pages, so that github can become the replacement for your shared hosting. Wanna keep going? Create a repository for your personal blog, load it up with html pages, and github will gladly serve that as well, custom domain and all. I created a page for todo-cl here.

But wait, there’s more.

There is Jekyll. Jekyll is a bit like a WordPress for github, a framework that allows you to create something like a blog in a repository. Throw some posts in a folder, and with a little presentation magic github serves it up. I doubt I’ll be switching to it anytime soon, but I find the concept intriguing and plan on keeping my eye on it.

So GitHub is a bit like Facebook for hackers, without all the Mafia Wars and Farmville requests. Exploring the world of code waiting out there ready to be “forked with” is a little overwhelming, but to know there are so many folks out there making so many things with software is comforting. Coming from a guy who sees making machines do things as an end in and of itself, there are a lot of ends to explore here.

So check out todo-cl if you want to enter todos from the command line or Launchy. Check out GitHub if you want to play with code.

With just a little trepidation I have checked the code for all of my free software goodies into GitHub. I was getting requests to provide access to the source for several of my old projects, so rather than emailing code around on a case-by-case basis, I have simply checked everything in for posterity. Who knows, maybe someone will jump in there and fix all the bugs.

I have written some other tools for my personal use that I will likely check in there as well, so check back if you’re interested in that kind of thing.

In my previous post, I discovered that the RWMidi library was available via GitHub, leaving it open to the possibility of forking and making my sync and pulse resolution changes public. Proving that Jung and Sting were on to something, while making these changes I received an e-mail from a user struggling with RWMidi:

There are no exposed methods to send pitch bend in RWMidi. So, newly empowered with forked code from GitHub, I created a method in the MidiOutput class that would allow a user to do this. I did a little research on the format of the pitch bend message and came up with this:

Pitch Bend
The pitch bend wheel is also a continuous controller on many MIDI keyboards. Possibly, the Pitch Bend message is in its own category because it is something likely to be done frequently. The two bytes of the pitch bend message form a 14 bit number, 0 to 16383. The value 8192 (sent, LSB first, as 0x00 0x40), is centered, or “no pitch bend.” The value 0 (0x00 0x00) means, “bend as low as possible,” and, similarly, 16383 (0x7F 0x7F) is to “bend as high as possible.” The exact range of the pitch bend is specific to the synthesizer.

Crudely translated, this simply means that pitch bends are in the range of 0 to 16383, with 8192 meaning “don’t bend”. Bending up means sending a value higher than 8192; bending down means sending a lower one.

The format is a little funky and deserves some explanation. Rather than sending two 8-bit bytes to represent a 16-bit value, I have to send two bytes that together represent a 14-bit value, which will be interpreted as a number between 0 and 16383. So, in order to send the correct bytes, I have to take my original value and repack it into two 7-bit bytes instead of two 8-bit ones.

For the plain-jane 16-bit number 4000, the two 8-bit bytes would look like:

Bit:   8   7   6   5   4   3   2   1
LSB:   1   0   1   0   0   0   0   0
MSB:   0   0   0   0   1   1   1   1

The 14-bit value that I need to send would be:

Bit:   8   7   6   5   4   3   2   1
LSB:   0   0   1   0   0   0   0   0
MSB:   0   0   0   1   1   1   1   1

When visualized like this, it becomes clear that what we need to do is shift the high bit of the least significant byte into the low bit of the most significant byte, shifting the entire MSB one place to the left. The way to think about this is to treat the two 8-bit bytes as zero-padded 7-bit values: when the receiving device gets the message, it discards the top bit of each byte and strings the remaining seven bits of each together to form one 14-bit value.

Doing this is somewhat straightforward. I take the pitch bend value modulo 128 (the number of values a 7-bit byte can represent) to get the LSB. From there I can simply divide the pitch bend value by 128 to get the MSB. To send a pitch bend of 4000, I would send the numerical value 32 for the LSB, and 31 for the MSB. Strip the top bit of each of these bytes, then string the remaining bits together into one 14-bit number, and you get 111110100000. This isn’t the most efficient way of doing it, of course; I could shift some bits around, mask off values, etc. But I didn’t. The method I created is pretty straightforward:
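The original method isn’t reproduced here, but the arithmetic it describes can be sketched in plain Java. The class and method names below are illustrative, not RWMidi’s actual MidiOutput API; only the byte math (LSB = value mod 128, MSB = value / 128, and the standard MIDI pitch bend status byte 0xE0 plus the channel) follows the text:

```java
// Hypothetical sketch of the pitch bend byte math described above; the names
// here are illustrative, not RWMidi's real MidiOutput API.
public class PitchBendMath {

    /** Split a 14-bit pitch bend value (0..16383) into its two 7-bit data bytes. */
    public static int[] toDataBytes(int value) {
        if (value < 0 || value > 16383) {
            throw new IllegalArgumentException("pitch bend must be 0..16383");
        }
        int lsb = value % 128; // low 7 bits
        int msb = value / 128; // high 7 bits
        return new int[] { lsb, msb };
    }

    /** The MIDI status byte for a pitch bend message on the given channel (0..15). */
    public static int statusByte(int channel) {
        return 0xE0 | (channel & 0x0F);
    }

    public static void main(String[] args) {
        int[] b = toDataBytes(4000);
        System.out.println("LSB=" + b[0] + " MSB=" + b[1]); // 32 and 31, as above
    }
}
```

The bit-twiddling equivalent the text alludes to would be `lsb = value & 0x7F; msb = (value >> 7) & 0x7F`, which produces the same bytes.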

There is some error checking here to make sure we don’t try to send a stupid value. Obviously if the max is 16383, there is no reason to allow 16384, not to mention a negative number. To test this I created a little sketch that looks a heck of a lot like a pitch bend slider, using the mouse’s X-position to control the amount of bend. I mapped the value of the 800 pixel width box to match our minimum and maximum pitch bend values, 0 and 16383 respectively:
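The sketch itself isn’t shown here, but its core is a single linear mapping from mouse position to bend value. A minimal plain-Java version of that mapping, assuming the 800-pixel-wide window mentioned above (in Processing this is just `map(mouseX, 0, width, 0, 16383)`):

```java
// Minimal sketch of the mouse-to-pitch-bend mapping described above,
// assuming an 800-pixel-wide window; names here are illustrative.
public class BendSlider {
    static final int WIDTH = 800;

    /** Rescale a mouse X position (0..WIDTH) to the 0..16383 pitch bend range. */
    public static int mouseToBend(int mouseX) {
        int x = Math.max(0, Math.min(WIDTH, mouseX)); // clamp to the window
        return (int) ((long) x * 16383 / WIDTH);      // linear rescale
    }

    public static void main(String[] args) {
        System.out.println(mouseToBend(0));   // far left: full bend down
        System.out.println(mouseToBend(800)); // far right: full bend up
    }
}
```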

And there you have it. The code is checked in to my fork of the RWMidi library and a pull request has been issued. I checked in a pre-compiled library with this functionality added as well.

Creating a pitch bend slider on the screen is small potatoes with functionality like this. My sketch should only serve to get you started. You should think about mapping other, more exotic pitch bend controls to this method. Perhaps a sine wave generator, allowing you to change the pitch somewhat like an LFO would. Perhaps you can make the pitch “wobble” when the mouse tracks over some position on the screen? That old MIDI keyboard on your desk can already bend the pitch with a slider; how can you take this functionality and make something unique?

Many moons ago, I created a little tool called the GOL Sequencer Bank. You can read more about it here, and here. In order to create the tool, I used RWMidi, a Java/Processing library created by Manuel Odendahl of Ruin&Wesen. While creating the sequencer bank, I discovered that the RWMidi library had no support for MIDI Sync messages, preventing me from syncing the sequencer with a master, like Ableton Live. This simply would not do.

In the past, I would have looked for another library, but given that I had the source readily available, and had already written a ton of code interfacing to RWMidi, I decided it would be a better use of my time to modify the RWMidi library to support sync messages. You can read more about that here. The changes were minimal, and I learned a lot in the process about MIDI, Java, and the art of modifying open source.

Not too long after that, word got out that I had made this change to the RWMidi library, and I started getting one-off requests to send my modified library to folks for their own use. For instance, John Keston over at AudioCookbook built his GMS synth using my modified library.

A short time after that, Mr. Keston approached me with what I thought at the time was a strange requirement: modify the library to support resolutions greater than 24 PPQ, so it could recognize 64th and 128th notes. In plain English, John wanted the GMS to support 64th and 128th notes using plain-jane MIDI clock. I thought it couldn’t be done, but I loved the challenge, and modified the RWMidi library accordingly. It was a doozy.

These modifications to the RWMidi library have only been available as custom changes to the GMS and the GOL Sequencer Bank, but now, through the power of GitHub and social coding, I can make them available to anyone who wants them. I have forked the RWMidi library on GitHub, incorporated the changes there, and issued a pull request to Manuel to include them in the main source.

I have also built a jar file and included it as a download on GitHub. You can get it here.

Having finally discovered the beauty of social coding, I plan on eventually uploading the source of both the HarmonicTable and the GOL Sequencer. I’ve had requests in the past to make changes to the synth that I just don’t have time for; this way, people in the know can simply fork the source, make their own changes, and ask me to pull them into the main body of work.

As more and more regular Janes and Joes become savvy programmers (i.e. our children), I expect we’ll see the power of social coding change the way we think about how software is made in general…