Posts Tagged ‘ruby on rails’

If you’re reading this article, you’re wasting valuable time that could be spent building robots, in ruby, that battle each other to the death. I can hear you already: “Does such a thing actually exist??” Well no, not really. But there are two consolation prizes:

Rubots!

“Rubots” is the word you use when you’re referring to Ruby Robots but don’t want to waste precious rubot-coding time pronouncing the full phrase. My goal is simple: to create Ruby-based games where you can code your own player classes to battle against the sample players provided, other players you’ve created, or for the most fun: players created by your Rubyist friends!

The popularity of Rails has brought many people to the Ruby camp. Sadly, many of these people don’t learn how to code Ruby outside of the Rails environment. I got my start through Rails, and there were times I wasn’t sure where one ends and the other begins. I want to get programmers comfortable coding pure Ruby, and there’s no better way than the promise of digital violence.

Prisoner’s Dilemma

My first iteration of this concept is a Rubot implementation of the classic game theory exercise, Prisoner’s Dilemma. The gist is that you’re one of two prisoners who have been placed in separate rooms and questioned by the authorities for a crime. You have to decide whether to cooperate with your partner in crime by not saying anything, or betray them by cutting a deal. There are different rewards/consequences based on how each of the two players decides to act.

The README is pretty comprehensive, so I won’t rehash all of it here. The basic idea is that you create player classes that decide whether to cooperate with, or betray, their opponents. The game is played in multiple turns, so you can base your decision on how you and your opponent have behaved in previous turns, or any other criteria you want to consider. Maybe your prisoner gets grumpy around nap time, and from 2-4pm it only betrays its opponent :)

This is meant to be played tournament style, and so I’ve included a basic round-robin script that loads all player classes in the project directory and pits them against each other in a battle royale. This is a great activity for Ruby groups, especially among beginners. Player classes inherit from a prefabbed parent class, and simple strategies can be implemented with even a basic understanding of the language.
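To give a feel for what a player class involves, here is a hedged sketch (the class and method names are my own, not necessarily those in the repo) of a parent class and one classic strategy, tit for tat: cooperate on the first turn, then mirror whatever the opponent did last.

```ruby
# Hypothetical sketch; the real parent class ships with the project.
class Player
  # history: array of [my_move, opponent_move] pairs from earlier turns
  def decide(history)
    raise NotImplementedError
  end
end

class TitForTat < Player
  def decide(history)
    return :cooperate if history.empty?
    history.last[1] # repeat whatever the opponent did last turn
  end
end
```

The round-robin script would then instantiate every player class it finds and call its decision method turn by turn.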

Enjoy, and please provide feedback. This will be the first of (hopefully) many games offered in this same style. Rubots unite!

I had an interesting experience at a code retreat with its creator, Corey Haines. I created some code that I felt was really perfect. I didn’t think there was room for much improvement, but it only took Corey a few seconds in passing to find a flaw. It starts with this list of rules for simple design:

Passes tests – the code should be test-driven, and the tests should all pass.

No duplication – often known as DRY – don’t repeat yourself. Every distinct piece of information in the system should have one (and only one) representation in the code.

Expresses intent – the code should be self-explanatory.

Small – methods, classes, indeed the entire application shouldn’t be any bigger than absolutely necessary.

My Original Version

I won’t explain what this code is supposed to do. That might defeat the point. See if you can figure out which principle I violated with this code. I’ll say it’s not Rule #1, but showing the tests would take up too much room.

The Problem

Corey asked me one question: what if one of the requirements changes? And there it was. In an attempt to do the most in the fewest lines, I’d over-refactored the method. Not only had I made the method brittle if business requirements should change in the future, I’d factored out the intent of the method itself.

As usual, one good software practice begets another. Test-driven development results in smaller, simpler methods for instance. And in this case, showing intent in your code reduces brittleness. So how do you accomplish this?

This code is longer, but it shows intent much more clearly. You don’t even need to know what “overpopulated” is to understand what the method is doing. But if you want to know, or need to change it, it’s easy. In fact, we’re passing neighbor_count around a lot, so it looks like it’s time to abstract this into a class:
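As a hedged illustration (the names are my own guesses in a Game of Life context), the class that absorbs neighbor_count might look something like this:

```ruby
# Hypothetical sketch: an intent-revealing home for neighbor_count.
class Neighborhood
  UNDERPOPULATION_LIMIT = 2
  OVERPOPULATION_LIMIT  = 3

  def initialize(neighbor_count)
    @neighbor_count = neighbor_count
  end

  def underpopulated?
    @neighbor_count < UNDERPOPULATION_LIMIT
  end

  def overpopulated?
    @neighbor_count > OVERPOPULATION_LIMIT
  end
end
```

Now if a requirement changes, say overpopulation starts at five neighbors instead of four, exactly one constant moves and the intent stays on the page.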

Last night I hung out with some KC friends and we set up our dev environments for the event. I got motivated, and created a base environment on GitHub you can download. It runs your tests automatically using Watchr every time you save your code file, and if you’re on a Mac it even takes a screenshot at each save! Now you can go back and relive the magic. Maybe string them together into a video with a little commentary, and boom – easy post-retreat blog video.

Use the link above, and let me know if it was useful!

*not the Wendy’s guy, as my wife likes to ask. You’d think since the world is down to just one living, notable Dave Thomas that joke would get a little old. I think the people who grew up watching Wendy’s commercials will also have to die out first :)

This is the last article in this series describing the concept of double-blind test-driven development. This style of testing can add time to development, but this can be cut significantly using RSpec matchers.

If you’re not familiar with matchers, they’re the helpers that give RSpec its English-like syntax, and they can be a powerful tool for speeding up all of your test-driven development – whether you follow the double-blind method or not.

If you’re using RSpec, you’re already using its built-in matchers. Say we have a Site model, and its url method takes the host attribute and prepends the ‘http://’ protocol. Here’s a likely test:

describe Site, 'url' do
  it "should begin with http://" do
    site = Site.new :host => 'example.com'
    site.url.should eq('http://example.com')
  end
end

The eq() method in the code above is the matcher. You can pass it to any of RSpec’s should or should_not methods, and it will magically work.

But the magic isn’t that hard, and you can harness it yourself for custom matchers that conform to your application.

The Many Faces of Custom RSpec Matchers

While I don’t want this article to turn into a primer on custom RSpec matchers (it’s a little off-topic), I’ll give you the three styles of defining them, and explain my recommendations. There are simple matchers, the Matcher DSL, and full RSpec matcher classes.

Let’s start by writing a test we want to run:

it "should be at least 5" do
  6.should be_at_least(5)
end

This test should always pass, provided we’ve defined our matcher correctly. The first way to do this is the simple matcher:

As you might guess, actual represents the object that “.should” whatever – in this case “.should be_at_least(5)”. This version makes a lot of assumptions, including the auto-creation of generic pass and fail messages.

If you want a little more control, you can step up to RSpec’s Matcher DSL. This is the middle-of-the-road option for creating custom matchers:

RSpec::Matchers.define :be_at_least do |minimum|
  match do |actual|
    actual >= minimum
  end

  failure_message_for_should do |actual|
    "expected #{actual} to be at least #{minimum}"
  end

  failure_message_for_should_not do |actual|
    "expected #{actual} to be less than #{minimum}"
  end

  description do
    "be at least #{minimum}"
  end
end

Now we’re rocking custom failure messages, and test names. This is pretty cool, and honestly how I started out doing matchers. It’s also how I started out doing the matchers for double-blind testing.

The problem is that by skipping the creation of actual matcher classes, we lose the ability to do things like inheritance. Not a big deal if our matchers stay simple, but they won’t. Not if we use them as often as we should! I found myself re-defining the same helper methods in each matcher I defined this way.

So let’s see just how daunting a full-fledged custom matcher class really is:

module CustomMatcher
  class BeAtLeast
    def initialize(minimum)
      @minimum = minimum
    end

    def matches?(actual)
      @actual = actual
      @actual >= @minimum
    end

    def failure_message_for_should
      "expected #{@actual} to be at least #{@minimum}"
    end

    def failure_message_for_should_not
      "expected #{@actual} to be less than #{@minimum}"
    end
  end

  def be_at_least(expected)
    BeAtLeast.new(expected)
  end
end

This isn’t so bad! We’re defining a new class, but you can see it doesn’t have to inherit from anything, or use any unholy Ruby voodoo to work.

We just have to define four methods: initialize, matches? (which returns true or false), and the two failure message methods. Along the way, we set some instance variables so we can access the data when we need it. Finally, we define a method that creates a new instance of this class, and that’s what RSpec will rely on.

You can add as many other methods as these four will rely on. But you also get other benefits over the DSL. You can use inheritance, moving common methods up the chain so you only have to define them once, instead of in each matcher definition. You can also write setup/teardown code in your parent classes, make default arguments a breeze, and standardize any error handling. I do all of these in the matchers I created for the example app.
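To illustrate the inheritance payoff, here is a hedged sketch (not the example app’s actual classes): the shared plumbing moves into a parent class, and each concrete matcher defines only what’s unique to it.

```ruby
module CustomMatcher
  # Parent class holding the plumbing every matcher shares
  class Base
    def matches?(actual)
      @actual = actual
      match
    end

    def failure_message_for_should
      "expected #{@actual} to #{description}"
    end

    def failure_message_for_should_not
      "expected #{@actual} not to #{description}"
    end
  end

  # Concrete matchers only supply match and description
  class BeAtLeast < Base
    def initialize(minimum)
      @minimum = minimum
    end

    def match
      @actual >= @minimum
    end

    def description
      "be at least #{@minimum}"
    end
  end

  def be_at_least(minimum)
    BeAtLeast.new(minimum)
  end
end
```

Each new matcher is now a handful of lines, and changes to the shared behavior happen in exactly one place.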

The bottom line is this: defining your own matcher classes directly really DRYs up your matchers, and that always makes life simpler. I think it’s the only way to go for serious and heavy RSpec users. It allows the class for my validate_presence_of matcher to be this short and sweet:

Summary

Now that you’ve seen my entire proposal for double-blind testing, let me know what you think. Be cruel if you must; it’s the only way I’ll learn. I’ll do my best to explain (not defend) my reasoning, and keep an open mind to changes.

I’ll also be publishing my double-blind matchers as a gem so you can add them to your project.

The last article in this series defined the concept of double-blind test-driven development, but didn’t get much into real-world examples. In this article, we’ll explore several such examples.

The Example Application

This article includes a sample app that you can download using the link above. Be sure to check out tag “double_blind_tests” to see the code as it appears in this article. The next article will have a lot of refactoring. I limited my samples to the model layer, where 100% coverage is a very realistic goal, and where this is likely to be of the greatest benefit.

I chose a simple high school scheduling app with teachers, the subjects they teach, students, and courses. In this case, I’m defining a course as a student’s participation in a subject. Teachers teach (ie, have) many subjects. Students take (have) many subjects, via courses. The course record contains that student’s grade for the given subject.

The database constraints are intentionally strict, and most of the validations in the models ensure that these constraints are respected in the application layer. We don’t want the user seeing an error page because of bad data. Depending on the application, that can be worse than actually having bad data creep in.

In order to factor out our own assumptions, we have to ask what they are. The assumption is that the subject we add to the teacher’s subject list works because of the has_many relationship. So we’ll first test that teacher.subjects is, in fact, empty when we assume it would be. Then we’re free to test that adding a subject works as we expect.

Again, we’re challenging the assumption that the association is nil by default, by testing against it before verifying that we can add a teacher. This tests that this is a true belongs_to association, and not simply an instance method. This is the kind of thing that can and will change over the life of an application.

This example was actually explained in detail in the last article. Validate that the error doesn’t already exist before trying to trigger it. Don’t just test the default value when you create a blank object, test the likely possibilities. Refactor the error message to DRY up the test and add readability. And finally, test by modifying the object you already created (as little as possible) rather than creating a new object from scratch for each part of the test.

A more complex version is needed to validate the presence of an association:

While you can definitely start to see a pattern in validation testing, this introduces a new element. Instead of freshly setting the name attribute to be 51 characters long, we test the valid edge case first and then add *just* enough to make it invalid – one more character.

This does two things: it verifies that our edge case was as “edgy” as it could be, and it makes our test less brittle. If we wanted to change the test to allow up to 100 characters, we’d only have to modify the test name and the initial set value.

We’re doing the same here as in our testing of name’s length. We’re setting the edge value that’s *just* within the allowed range, then adding or subtracting a penny to make it invalid. I split up the top and bottom edge tests, because it’s better to test as atomically as possible – one limit per test.

In this case, we can’t avoid having to recreate the model from scratch, because of the nature of the implementation. There’s no actual code in the model that makes this happen; it’s purely in the database schema. Why should we test it, then? Because we test any behavior we’re going to rely on in the application. The fact that this model behavior is implemented at the database level (and therefore, not purely TDD) is a small inconvenience.

What’s the assumption our double-blind test is verifying in this case? That the value is only set in the absence of other values being explicitly assigned. Testing with nil and blank values verifies that the default doesn’t override them – it only works in the complete absence of any assignment. I also test an arbitrary (but valid) value as the anti-assumption test before finally verifying that the default is set to the correct value.

Most default tests verify only that the correct default value is set – the double-blind version verifies that it’s acting only as a default value in all cases.

Summary

The point of double-blind testing is bullet-proof tests, that can’t be reasonably thwarted by antagonistic coding – whether that’s your anti-social pairing partner, or yourself several months down the road. The bottom line is this: test all assumptions.

That being said, this is very time consuming, and we can see a ton of repetition even in this small test suite. What we need is a way to get back to speedy testing before our boss/client notices it now takes an hour to implement one validation.*

*Even if you work for a government owned/regulated institution that actually digs that kind of non-agile perversion, you WILL eventually go insane. Even in this small sample app, the voices in my head had to talk me off a building ledge twice.

The answer lies in RSpec matchers, which are easy to implement, and can grow with your application. The benefit is not just speedier development – it’s also consistency across your application. We’ll explore that in the last article of this series.

This is a three-part series introducing the concept of double-blind test-driven development in Rails. This post defines the concept itself, and lays the groundwork by showing the way tests are more commonly written. The next couple posts will show how to double-blind test various common rails elements, and how to make this added layer of protection automatic and quick.

Truth be told, if you’re seeing this in the wild the app is probably doing pretty good. This level of testing works great during the early stages of an app, when things are simple. But as things grow and/or multiple developers become involved, you need more.

Consider models where the associations and validations stretch into the dozens of lines. The more careful and specific you are about validations, the easier it is to get conflicting or overlapping validations. I actually came up with the concept of double-blind testing while retro-testing models in a client app that previously had no validation specs.

What is Double-Blind Testing?

In the world of scientific studies, you always need a control group. One set of participants gets the latest and greatest new diet pill, while the other gets a placebo. Researchers used to think this was good enough, and probably pretty funny to watch the placebo users rave about their shrinking waistlines. But it turns out studies like this still allowed some bias – as researchers observed the effects, their *own* preconceived notions tainted results. Enter the double-blind study.

In a double-blind study, the researchers themselves are unaware of which participants are in the control group, and which are being tested. Both sides are “blind”. They may have lost funny patient anecdotes, but they gained research reliability.

Applying the Lessons of Double-Blind Studies to Test-Driven Development

As I said, in the early stages of an app the tests I showed above work great, as long as you’re using TDD and the red-green-refactor cycle. This means you write the test, run it, and it fails. Then you write the simplest code that will make the test pass, run the test again, and confirm that it passes. Most testing tools will literally show red or green as you do this. Then, as you start to amass tests, you’re free to refactor your code (abstracting common code into helper methods, changing for readability, etc) and run the tests again at any time. You will see failures if you broke anything. If not, you’ve more or less guaranteed your code refactoring works properly.

The problem comes in when you start changing old code, or adding tests to processes that didn’t initially happen. What I’m calling double-blind testing is this:

each test needs to verify the object’s behavior before testing what changes.

As an example, let’s rewrite one of the tests from above:

# original test
describe "name" do
  it "is present" do
    teacher = Teacher.new
    teacher.should_not be_valid
    teacher.errors[:name].should include("can't be blank")
  end
end
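The double-blind rewrite can be sketched as follows. This is a hedged reconstruction, written with bare assertions and a minimal stand-in Teacher so it runs outside Rails; the real version is an RSpec spec in the same style as the original test above.

```ruby
# Hedged reconstruction of the double-blind version of the test.
# The stand-in Teacher exists only so this sketch is runnable;
# in the app it's an ActiveRecord model.
class Teacher
  attr_accessor :name

  def initialize(attrs = {})
    attrs.each { |key, value| send("#{key}=", value) }
  end

  def errors
    @errors ||= Hash.new { |hash, key| hash[key] = [] }
  end

  def valid?
    errors.clear
    errors[:name] << "can't be blank" if name.nil? || name == ''
    errors.empty?
  end
end

error = "can't be blank"

# First, verify the error is NOT present when it shouldn't be
teacher = Teacher.new(:name => 'Joe')
teacher.valid?
raise 'unexpected error' if teacher.errors[:name].include?(error)

# Then verify each case that SHOULD trigger it, modifying the
# same object as little as possible between checks
teacher.name = nil
raise 'expected invalid' if teacher.valid?
raise 'missing error' unless teacher.errors[:name].include?(error)

teacher.name = ''
raise 'expected invalid' if teacher.valid?
raise 'missing error' unless teacher.errors[:name].include?(error)
```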

This is the basic pattern for all double-blind testing. We’re not leaving anything to chance. In the original version, we expected our object to be invalid, we treated it as such, and we got the result we expected. Do you see the problem with this?

Here’s an exercise: can you make the original test pass, even though the object validation is not working correctly? There’s actually a style of pair programming that routinely does exactly this. One developer writes the test, and the other writes just enough code to make it pass, with the good-natured intention of tripping up the first developer whenever possible. If you wrote the original test, I could satisfy it by just adding the error message to every record on validation, regardless of whether it’s true! Your test would pass, but the app would fail.

The test is now “double-blind” in the sense that we as testers have factored out our own expectations from the test. In this case, we expect the error message to not be there until we initialize the object a certain way, and this can be bad. It may sound far-fetched or paranoid*, but in large codebases your original tests are often abused in this very way. The “you” that writes new code today is often at odds with the “you” from three months ago that wrote the older code with a different understanding of the problem at hand.

*Plus, everybody knows it’s not paranoia when the world really is out to get you. I’ve discussed this at length with the voices in my head, and they all agree. Except Javier. That guy’s a jerk.

Now that I’ve laid out the justification, let’s take a closer look at how the test changed. The first thing I did was create a version of the object that I believe should NOT trigger the error message. Then I run through two cases that should. You can see right away, I was forced to be more *specific* about what should trigger an error. Instead of just a blank object with no values set, I’ve proactively set the attribute in question to both nil and blank. A key element here is to try to work with the *same* object, modifying between tests, rather than creating a new object each time. My test wouldn’t have been as specific if I’d just recreated a blank Teacher object and run a single validation check.

Also, with the increased code comes the increased chance of typos. We don’t want to DRY test code up too much, because a good rule is to keep your tests as readable (non-abstract) as possible. But I’ve specified the error message at the top of the test, and reused that string over and over. I did this in a way that DRYs the code and adds readability. You can see at a glance that all three tests are checking for the same error.

Finally, the first time I run the object’s validation, notice I’m not asserting that it should be valid. If I had written teacher.should be_valid on line 8 of the double-blind test, I’d have to take the extra time to make sure every other part of the object was valid. Not only is this time-consuming, it’s very brittle. Any future validations would break this test.

If you use factories often, you may suggest setting it up that way, since a factory-generated object should always be valid. Then you could assert validity. However, this only slows down your test suite. It’s enough just to run valid? on the object, which triggers all the validation checks to load up our errors hash.

Summary

I believe this is a new concept – I was already coding most of my tests this way, but it didn’t dawn on me how valuable it was until I started retro-testing previously testless code. The value showed itself right away.

I would love to hear feedback on this – if you think it’s unnecessary (I tend to be very rainman-ish about my testing code) or even detrimental. However, if you think it’s too much work, I ask you to hold your criticism until you’ve read part 3 of this article, where I show how to use your own RSpec matchers to greatly speed this process.

If you work with legacy databases, you don’t always have the option of changing column names when something conflicts with Ruby or Rails. A very common example is having a column named “class” in one of your tables. Rails *really* doesn’t like this, and like the wife or girlfriend who really hates your new haircut, it will complain at every possible opportunity:

Like the aforementioned wife/girlfriend, you’re not going anywhere until this issue is resolved. Luckily, Brian Jones has solved this problem for us with his gem safe_attributes. Rails automatically creates accessors (getter and setter methods) for every attribute in an ActiveRecord model’s table. Trying to override crucial methods like “class” is what gets us into trouble. The safe_attributes gem turns off the creation of any dangerously named attributes.

After including the gem in your Gemfile, pass bad_attribute_names the list of offending column names, and it will keep Rails from trying to generate accessor methods for them. Now, this does come with a caveat: you don’t have those accessors. Let’s try to get/set our :class attribute:

The setter still works (I’m guessing that it was still created because there wasn’t a pre-existing “class=” method) and we can verify that the object’s attribute has been properly set. But calling the getter defaults to…well, the default behavior.

The answer is to always use this attribute in the context of a hash. You can send the object a hash of attribute names/values, and that works. This means your controller creating and updating won’t have to change. Methods like new, create, update_attribute, update_attributes, etc will work fine.

If you want to just set the single value (to prevent an immediate save, for example) do it like this:

Basically, you can still set the attribute directly, instead of going through the rails-generated accessors. But we’re still one step away from a complete solution. We want to be able to treat this attribute like any other, and that requires giving it a benign set of accessors (getter and setter methods). One reason to do this is so we can use standard validations on this attribute.

We’re calling the accessors “class_name”, and now we can use that everywhere instead of the original attribute name. We can use it in forms:

# example, not found in code
<%= f.text_field :class_name %>

Or in validations:

# add to app/models/user.rb
validates_presence_of :class_name

Or when creating a new object:

# example, not found in code
User.create :class_name => 'class of 1995'

If you download the code, these additions are test-driven, meaning I wrote the tests for those methods before writing the methods themselves, to be sure they worked properly. I encourage you to do the same.

Part 1 of this series came out exactly 3 months and 3 days ago. Special thanks to a reader named Edward who prodded me to finally add the controllers and views to this.

Going beyond the model layer for nested comments introduces a new programming idiom: recursion. Some Ruby developers may not be familiar with it – especially if your experience is mostly web-related, where the need doesn’t come up as often. Recursion in a nutshell is the act of a method calling itself. If you’ve seen Inception, the ability to have dreams within dreams within dreams means those dreams are recursive. If you haven’t seen the movie, think of Russian matryoshka dolls. You won’t experience star-studded special effects with the dolls, but you’ll at least get the idea of recursion.

Unlike Russian dolls or most of Leo’s recent work, recursion in software is potentially infinite. Practically speaking though, it’s more like the doll thing. After all, a system only has so many resources, and recursion is expensive in this regard – the method must copy itself in memory at each layer, local variables and all. On the plus side, recursive solutions tend to be shorter and more elegant than the equivalent loops. And in our case, we’ll be hitting the database at each layer. We’ll ignore the dangers in our simple app, though.

Before you get too excited and start pulling out your Nana’s childhood Russian doll set for comparison, this isn’t true recursion. It’s well documented that nesting resources any more than two layers deep is painful and unnecessary, so think of this as the lamest Russian doll ever.

It’s not much bigger, but there’s a lot going on here! First, since comments are nested, we have to look for a parent. We’re only creating comments in this example, so we only have those related actions. Comments will always be shown on a post page.

The really exciting part is after a successful comment creation. How do we redirect back to the post page? For all we know, this comment could be buried down 12 layers of replies. All we really have access to so far is the parent of the object. This necessitates a new model method:

Recursive functions are often short and sweet for two reasons: they’re already complex by nature, and adding more code than necessary would make them unmanageable. Also, they’re getting a lot done in just a few lines. In this case, the second line is the key: if “commentable” (the parent object) is a post, return that. Otherwise, call this same method on the parent, which will in turn check if *it* is a Post, and so on.

I could have written it shorter, like this:

def post
  commentable.is_a?(Post) ? commentable : commentable.post
end

In fact, I did at first. But the extra code that checks and sets an instance variable is caching the result. This way, if we call the same method on an object more than once, it stores the result for future use. Remember, recursion can be expensive – especially when the database is involved.
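Putting it together, the cached version can be sketched like this; minimal stand-in classes are included so the sketch runs outside Rails (in the app, Post and Comment are ActiveRecord models):

```ruby
class Post; end

class Comment
  attr_reader :commentable

  def initialize(commentable)
    @commentable = commentable
  end

  # Walk up the parent chain until we reach the Post, memoizing the
  # result so repeated calls don't repeat the (expensive) walk.
  def post
    @post ||= commentable.is_a?(Post) ? commentable : commentable.post
  end
end

root  = Post.new
reply = Comment.new(Comment.new(root))
reply.post # => the root Post, however deeply the reply is nested
```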

Views

Finally, it’s view time, with one more bit of recursion for fun.

Our post views are mostly standard scaffolding, with the exception of the show view:

This partial is recursive! The comments controller doesn’t have a show method, because we’re never going to view a comment by itself. Instead, the show-like code is in this partial, and at the end it checks to see if *this* comment has comments. If so, it calls the partial again on the whole collection. The end result is a nested, bulleted list of comments. This is not very sexy if you fire up the code yourself, but it’s a great starting point.

Summary

Hopefully this article has done a good job of explaining both recursion, and how to use it to achieve nested comments in your applications. If you’re new to recursion as a concept, haven’t seen Inception, didn’t inherit Russian dolls from Nana or receive them as a snazzy graduation present, and my explanation somehow fell short, it’s a well documented programming idiom. There are tons of resources online, so take the time to learn this powerful tool, then learn not to overuse it :)

Please download the code and play with it if you want to learn more – the code is fully test-driven so you can see how that works, which is just as important.

On a final note, I’m tempted to do a follow-up article with ajax and some nicer formatting. Perhaps in 3 months and 3 days…

Ruby is a dynamic language. One of the things it lets you do is define methods with an unknown (or variable) number of arguments. It does this using the splat operator. But the splat operator can actually be used for other things in your code, especially if you’re using Ruby 1.9. That’s because a small change to how splat operators work makes them much more useful.

In the beginning

The humble splat operator was first used to slurp up unnamed arguments to a method:

def sum *numbers
  numbers.inject { |sum, number| sum + number }
end

Sometimes, this is just syntactic sugar, because passing a list of numbers like this:

sum(1, 2, 3)

Is more intuitive and prettier, and less error-prone, than passing them as an explicit array:

sum([1, 2, 3])

In many cases though, the splat operator is more than just a pretty face. Take Ruby’s own method_missing instance method, which is available in every class. If defined, it will attempt to handle any calls to methods that don’t explicitly exist:

method_missing must be able to accept an unknown number of arguments, since just about any method call could be thrown at it. In this case, we’re using it to get and set instance variables without having to define them first using attr_accessor:

We’ve just created a class that lets us define any attributes we want, and our method doesn’t care whether *args contains zero arguments, or a hundred.

It gets unintentionally cooler

Up through Ruby 1.8 you could use this splat operator, in a limited fashion, for things other than method argument lists. You could use it to flatten an array, in contexts where it was the last element in the list:
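Based on the description that follows, the snippet likely looked something like this (variable names assumed):

```ruby
last_numbers = [3, 4, 5]
all_numbers  = [1, 2, *last_numbers]
all_numbers # => [1, 2, 3, 4, 5]
```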

The splat operator took the last_numbers array and expanded it inline! Now our new array contains five numbers, instead of two numbers and a nested array. This comes in handy for meta-programming. So let’s try putting the splat operator somewhere else in the array:

We can’t use the splat operator anywhere else except the end of a list, just like in method calls. This really limits its value in Ruby 1.8.

Things get intentionally cooler

As of Ruby 1.9, however, the splat operator has been given a little more love, and now it can be used almost anywhere. Basically, any array that is given the splat operator will “flatten” itself, and return the list of elements NOT in an array. Now we can do cool stuff like this:
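Under Ruby 1.9+ semantics, the “cool stuff” looks something like this (my own illustrations):

```ruby
middle = [2, 3, 4]
[1, *middle, 5] # => [1, 2, 3, 4, 5]

# The splat also works on the left side of an assignment:
first, *rest = [1, 2, 3]
first # => 1
rest  # => [2, 3]
```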

This is handy for all sorts of meta-programming challenges – namely, handling dynamic argument lists. It’s also great when you’re building an array out of smaller pieces where some of the pieces are scalars (single values) and some are arrays. Let’s say we have a Family class, that contains myself, my parents, and my siblings. I want a method that returns everybody in one large array. The usage would look like this:
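A hedged sketch of that Family class (the names are assumed from the description above):

```ruby
class Family
  def initialize(myself, parents, siblings)
    @myself   = myself   # a single name (a scalar)
    @parents  = parents  # an array of names
    @siblings = siblings # an array of names
  end

  # Splatting the arrays inline yields one flat list, no flatten needed
  def everybody
    [@myself, *@parents, *@siblings]
  end
end

family = Family.new('me', ['mom', 'dad'], ['sis', 'bro'])
family.everybody # => ["me", "mom", "dad", "sis", "bro"]
```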

Make it up as you go

One way Ruby is dynamic is that you can choose how to handle methods that are called, but don’t actually exist. If you have a lot of very similar methods, you can even use this to define them all at once! Ruby does this using the method_missing method, which you override in the classes where you need more dynamic method calling.

ActiveRecord’s dynamic find_all_by methods

Ruby on Rails uses method_missing with ActiveRecord’s find_all_by methods. There is no find_all_by_name method, but if your Person model has a name attribute, you can call Person.find_all_by_name('Bob') and it will return all the records that match that name.

Here’s a very simplified version of how Rails handles find_all_by requests:
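A hedged reconstruction of the idea, using an in-memory stand-in instead of a real database so it runs anywhere:

```ruby
class Person
  # Stand-in for database rows; Rails would query the table instead
  RECORDS = [
    { :name => 'Bob',   :age => 30 },
    { :name => 'Alice', :age => 25 }
  ]

  def self.method_missing(name, *args)
    # Does the method name match the find_all_by_* pattern?
    if name.to_s =~ /\Afind_all_by_(\w+)\z/
      attribute = $1.to_sym
      RECORDS.select { |record| record[attribute] == args.first }
    else
      super
    end
  end
end

Person.find_all_by_name('Bob') # => the records whose name is "Bob"
```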

Using regular expressions, method_missing sees if the method name matches something we expect. It parses out the interesting parts, and uses them to look up the objects we’re searching for. This is a good use case, because the attributes of an ActiveRecord model aren’t known until runtime.

Dynamic methods for dynamic objects outside Rails

We can apply this same technique outside of Rails. Let’s create the world’s most dynamic Ruby class:
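A hedged reconstruction of that Widget class, matching the behavior described below:

```ruby
class Widget
  def method_missing(name, *args)
    attribute = name.to_s
    if attribute.end_with?('=')
      # setter: strip the trailing "=" and store the value
      instance_variable_set("@#{attribute.chomp('=')}", args.first)
    else
      # getter: returns nil if the attribute was never set
      instance_variable_get("@#{attribute}")
    end
  end
end

widget = Widget.new
widget.price = 10
widget.color = 'red'
widget.price # => 10
widget.color # => "red"
```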

We’ve just created a Widget object that can have any attributes you want to give it. method_missing checks if the called method ends with an equal sign – if so, it assigns the value you passed, to an instance variable with that name. If there’s no equal sign, it tries to get the value of an instance variable by that name:

Use method_missing with methods that use blocks

You can also pass blocks to method_missing. Say we have an ActiveRecord model called Person, with name and age attributes. Let’s create something similar to find_all_by that gets the list of matching people, and runs them through the map method. We’ll call it map_by:
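A hedged reconstruction of map_by; in the article Person is an ActiveRecord model, so an in-memory table stands in here to keep the sketch runnable without a database:

```ruby
class Person
  RECORDS = [
    { :name => 'Bob',   :age => 30 },
    { :name => 'Sue',   :age => 30 },
    { :name => 'Alice', :age => 25 }
  ]

  def self.method_missing(name, *args, &block)
    if name.to_s =~ /\Amap_by_(\w+)\z/
      attribute = $1.to_sym
      matches = RECORDS.select { |record| record[attribute] == args.first }
      matches.map(&block)
    else
      super
    end
  end
end

Person.map_by_age(30) { |person| person[:name] } # => ["Bob", "Sue"]
```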

If a method is called that can’t be found, method_missing will check to see if it matches our map_by pattern, perform an ActiveRecord search, and push the results through map with the block we supplied.

Now let’s see if it works, by grabbing the names of all people in our database age 30:

I’ve done a few things. First, I changed our “if” conditional to a case statement, so that we can add to it in the future, and it will be clean and readable. I also moved the actual map_by code into its own method, for the same reason. And now, method_missing calls its parent method if it doesn’t find a match, to preserve inheritance.

You might also notice that instead of defining self.method_missing and self.map_by, I’ve wrapped these method definitions in a class << self block that essentially does the same thing. I think this is cleaner when you have several class methods.
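Putting those changes together, the refactored version might look like this (again a hedged sketch with the in-memory stand-in):

```ruby
class Person
  RECORDS = [
    { :name => 'Bob', :age => 30 },
    { :name => 'Sue', :age => 30 }
  ]

  class << self
    def method_missing(name, *args, &block)
      case name.to_s
      when /\Amap_by_(\w+)\z/
        map_by($1.to_sym, args.first, &block)
      else
        super # preserve normal NoMethodError behavior
      end
    end

    private

    # The actual work lives in its own method, keeping method_missing
    # readable as new patterns are added to the case statement
    def map_by(attribute, value, &block)
      RECORDS.select { |record| record[attribute] == value }.map(&block)
    end
  end
end
```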

method_missing can be used in any Ruby class, so long as you can anticipate dynamic methods that the users of your class might need, and preserve the chain of inheritance. This should be used sparingly, when you can cut down on method definitions by defining them dynamically. It’s easy to abuse this, and there is extra overhead involved. But for the right situations, method_missing can create shorter, more readable code.