Merb's approach was to have mass assignment protection in the controller, and I personally think it's self-evident that it belongs there. Moving it into the controller will also make it easier to solve the tension between reducing the friction of getting up and running quickly and having good security defaults.

In general, Rails' convention over configuration makes a stock Rails app more secure by default (CSRF protection, XSS protection, defenses against timing attacks and session fixation, etc.). This is a case where there's a real tension, but I think we can solve it by applying some brainpower to the question.

It is. The more interesting question is why it took Rails 7 years (and counting) to come to this conclusion.

<rant>

I'll take it one further and sing my song about the Rails ActiveRecord implementation here, which is tangentially related.

The promise of AR is to reflect on the database at startup and then "magically work".

The problem in Rails is that nothing magically works. What you get out of the box is an insecure, world-writable model, as illustrated by this bug. Then you begin scattering your truth over incremental (hand-crafted!) migrations, the model, and the controller, until nobody other than a cascade of unit tests can even make sense of the interdependencies anymore.

In a world where there is not even a central place to look up which fields exist on a model and what their constraints are - short of runtime introspection, where database constraints live happily alongside and independently of model constraints, where opaque bits of ruby-logic buried in various gems add their own magic to the mix, in such a world it's really no surprise they chose to default to "all fields writable".

Because if they forced the user to do the sensible thing and explicitly list the allowable fields, then where's the advantage over a declarative approach (e.g. Django's) anymore?

Imho it's long overdue to take a step back and revisit whether AR is still worth having, or ever was. It causes way more problems than it solves, in contrast to the declarative approach.

No downvote here. I'm by far no Rails guru - I've done one moderate-sized project in it (as an apprentice to someone far more experienced) and a couple of smallish projects - and Rails feels brittle to me. I thought I was pretty alone in that thinking, but based on your rant above, I'm not 100% on my own. I've gotten used to Grails/GORM, which has its own set of issues, for sure, but has always felt more natural.

While not mitigating most of the issues you listed, I thought I would mention the awesome https://github.com/ctran/annotate_models plugin in case you did not know about it. I personally could not work on Rails without it.

AR was born with a set of very opinionated decisions. I believe those that prefer a more declarative approach (and built-in identity map) can use DataMapper.

What do you think about the solution proposed by homakov (the "hacker")? Just mark all the *_id attributes "protected" by default. It seems elegant, really easy to implement, and fixes 90% of the problem.

I agree completely. The attr_accessible issue has been around for so long (I blogged about it in 2007: http://news.ycombinator.com/item?id=1031168) that I'd assume it is one of the easiest things to exploit in a production Rails application. Personally, if I wanted to hack a Rails application, this is the first thing I'd try.

Your proposed controller technique is great. This technique coupled with some intelligent defaults seems like a good idea, with respect to both sides of the story (both a secure-by-default and frictionless convention). I'm excited to see this in the limelight, and your proposal is a fantastic start to this conversation. Thank you sir!

Currently, I like to use a hack that automatically makes all models use attr_accessible with no allowed attributes. Until you override it with attr_accessible in your model, nothing is allowed, so you have to think hard about what should and shouldn't be accessible.
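The mechanism that hack relies on - an empty whitelist inherited by every model until explicitly overridden - can be sketched in plain Ruby. Everything here (Model, Post, this attr_accessible) is an illustrative stand-in, not real ActiveRecord:

```ruby
# Plain-Ruby sketch of "empty whitelist by default" mass assignment.
class Model
  @whitelist = []

  class << self
    attr_reader :whitelist

    # Every subclass starts locked down until it opts in.
    def inherited(subclass)
      super
      subclass.instance_variable_set(:@whitelist, [])
    end

    def attr_accessible(*attrs)
      @whitelist = attrs.map(&:to_s)
    end
  end

  def initialize(params = {})
    params.each do |key, value|
      # Drop anything not explicitly whitelisted.
      next unless self.class.whitelist.include?(key.to_s)
      instance_variable_set("@#{key}", value)
    end
  end

  def [](key)
    instance_variable_get("@#{key}")
  end
end

class Post < Model
  attr_accessible :title, :body  # explicit opt-in per model
end

post = Post.new("title" => "hi", "user_id" => 42)
post["title"]    # => "hi"
post["user_id"]  # => nil - not whitelisted, silently dropped
```

In an actual app the equivalent is a one-line initializer, along the lines of the ActiveRecord::Base.attr_accessible(nil) default suggested later in this thread.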

However, it causes real annoyance when not dealing with web input. Applying it to an existing project and fixing everything it breaks is super-annoying.

Right, refactoring an existing project to enforce attr_accessible nil by default can be a hassle.

From my experience, the main (though easily side-stepped) annoyance is when creating or updating records that have belongs_to associations (for example, user_id and repository_id for a commit ;)) programmatically.

For security purposes, you would not set those 2 attributes to be attr_accessible. To create a new record, you then would have to build the record and then set the user_id and repository_id on the record.

Or, you can set user and repository to be accessible (attr_accessible :user, :repository). This is fine because the associated methods expect ActiveRecord objects.
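The first approach (build the record, then set the protected ids explicitly) can be sketched in plain Ruby; Commit and its whitelist are made up for illustration, not real ActiveRecord:

```ruby
# Two-step creation: mass-assign only the whitelisted fields,
# then set the protected foreign keys explicitly.
class Commit
  WHITELIST = %w[message].freeze
  attr_accessor :message, :user_id, :repository_id

  def initialize(params = {})
    params.each do |key, value|
      public_send("#{key}=", value) if WHITELIST.include?(key.to_s)
    end
  end
end

# "user_id" here plays the attacker-supplied value from the request:
commit = Commit.new("message" => "fix bug", "user_id" => 999)

commit.user_id        # => nil - the params value never got through
commit.user_id = 7    # set explicitly from the session, not the request
commit.repository_id = 3
```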

Not every idea from Merb made it into Rails. Especially when an idea would cause significant backwards-compatibility breakage (return string from action vs. implicit rendering), we stuck with the Rails approach.

I always felt that "it's up to the developer to do the right thing" violates the normal Rails convention-over-configuration principles, but I also weigh heavily the cost of breaking a large % of existing Rails apps in a way that is not easy to quickly fix.

That said, this problem is almost identical to XSS protection. We were able to find a solution that mostly "just works" for new developers, with some caveats, but it broke nearly all existing apps in a way that required significant effort to fix.

Like mass assignment, the previous vulnerabilities were caused by Rails defaults that led most users to make mistakes (nearly everyone had at least a few cases where `h` was required but wasn't done).

Like XSS protection, we have a solution here that will mostly just work for the happy path. The end result is a Rails default that will be only marginally harder to use than what we have now, but secure by default.

> I always felt that "it's up to the developer to do the right thing" violates the normal Rails convention-over-configuration principles, but I also weigh heavily the cost of breaking a large % of existing Rails apps in a way that is not easy to quickly fix.

It can be argued that those apps were already broken. Nobody should complain about a security fix.

Posting it as an issue on the Rails repo and then exploiting GitHub with it is a great way to get attention, but not necessarily the most responsible.

I disclosed a vulnerability to GitHub before. I dropped it into their Issues system marked private with the heading "URGENT". It was a Sunday, and I got a response + a fix from Tom Preston-Werner himself within a few hours. That, in my mind, would have been a more responsible approach.

That might be unprofessional but it was also audacious - and easily the best way to get the word out quickly about an 'in the wild' exploit that will impact a huge number of apps.

Additionally: GitHub is a critical piece of infrastructure for a huge number of companies, contractors and start-ups. On balance, protecting GitHub from public embarrassment is far outweighed by the potential impact of this sort of flaw. If a good old public "git@github ~ $ rake over coals" is what is needed to ensure that our IP is sufficiently protected, so be it.

Egor discovered that a lot of big sites which use Rails suffer from very serious security issues, because common Rails practices and defaults don't produce secure sites. The issue is so non-obvious that he was able to impersonate others, avoid most of the checks...

He was aware of consequences, but he got mostly ignored. So he decided he had to get enough attention and publicity. There are too many sites too vulnerable to just wait.

I don't think he can be blamed too much, though. As per the bug filed here - https://github.com/rails/rails/issues/5228 - the bug was closed by others after a cursory look, then reopened for consideration. Maybe a little immature, but there was a mild provocation.

Fred Wu: So what's a good gem (if any) for safeguarding params on the controller level? @dhh's params.slice feels too dirty.
DHH: yeah, it's too simple to be clean! I think you are using the wrong framework if you crave more complexity for its own sake.
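For what it's worth, the params.slice approach needs no gem at all. At the time it relied on ActiveSupport's Hash#slice (core Ruby since 2.5); with plain hashes it amounts to the following, where the params hash and the Post keys are illustrative:

```ruby
# Controller-side whitelisting with nothing but Hash#slice.
params = { post: { title: "Hello", body: "...", user_id: 999 } }

safe = params[:post].slice(:title, :body)  # keep only the listed keys
safe  # => { title: "Hello", body: "..." } - the attacker's user_id never survives

# A controller action would then call something like Post.new(safe)
# instead of Post.new(params[:post]).
```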

The users who trust Github don't care if it was a "rails issue" or a "github issue".

It's a Github issue no matter what the cause or responsible party.

If Github were run on unpatched Windows XP boxes with all IPs public, you wouldn't argue that it was Microsoft's fault, because "everyone knows" what a bad idea that would be. The base assumption should not be complete reliance on your environment and frameworks being secure, because they very likely have unpatched bugs and exploits. You do what you can to harden yourself up, but when something goes wrong - it's your problem.

But (to continue your analogy) Github was run on fully patched XP boxes, in which case: yes, it would have been Microsoft's fault.

And it would have ultimately been Github's fault (in the eyes of their users) in that case too -- because they are providing the service, and their choices (XP / Rails, firewall / application firewall / audit) are their responsibility.

Contrary to how a lot of things are handled today, fault can be shared by multiple parties. What Egor did was get exposure and a reaction from a lot of involved parties, where he had been unable to get any reaction from some of them (the Rails team) in the past.

Well, it's a beginner-level Rails mistake; it is not precisely an obscure issue. Googling 'attr_accessible' will show you discussion going back years, and it has been actively exploited before. I'm shocked that the mistake was made in Github, though.

"Well, it's a beginner-level PHP mistake; it is not precisely an obscure issue. Googling 'SQL Injection/register_globals/get_magic_quotes/etc' will show you discussion going back years, and it has been actively exploited before. I'm shocked that the mistake was made in ..., though."

Ten+ years of register_globals, get_magic_quotes and SQL injection attacks (etc etc etc) in PHP show that even well-known issues still bite people everyday. IMO, it's up to the framework developers to make it easy to do the right thing and damn hard to do the wrong thing.

Obviously not beginner-level if even famous sites suffer from the same problem -- Egor mentions some in his posts, and he is only one person -- imagine what all the black hats can do working in parallel.

By your reasoning, since tons of C programs can't use strcpy() correctly and thus contain buffer overflows, it takes a C master to use strcpy() correctly.
I'd say instead that that kind of reasoning is wrong, and that even experts can make beginner-level mistakes (if less often). But these are the kind of mistakes a good framework/language/etc. should try to prevent - especially if security-related. Tons of research tries to reduce error-prone programming activities.

IMHO, it's a bad practice which is very widely promoted in Rails guides and scaffolding code.

Mass-assignment is very difficult to get right in any complex app (e.g. with multiple user roles) and the simplest way to avoid these problems would be to discourage the use of 'update_attributes' and teach people to explicitly set model properties.

(Of course, Rails folks probably hate the non-DRY aspect of this, but I prefer to look at my code and see exactly what's happening with user input.)

Yes, Github had vulnerable code. It's also true that Rails apparently defaults to leaving that bit of code vulnerable. A saner default seems in order, if even highly competent Rails devs can be caught by this.

I'd agree with it being "not the most responsible" way if you were talking about him publicly posting an exploit instead of privately talking to github. But the vulnerability was already known, he didn't disclose anything new, so I don't see the lack of responsibility.

Fact is, if he hadn't exploited the vulnerability in a high-profile site, the extent of the vulnerability would be unknown and it wouldn't have been fixed. Since (damage done) << (benefit), I think we should be thanking this guy.

I would say that homakov's angry and not very mature reaction to his warning being ignored just did a very big favor to a lot of Rails developers, who, after reading about his exploit on HN (and other places), will rush to check their websites and fix a LOT of serious vulnerabilities they didn't have any idea they had. But which somebody could have already been secretly exploiting.

Understandably, Github would have liked much more to be one of those companies able to quietly fix this vulnerability without anybody knowing, but now that the damage to their image is done, I really hope they'll not add to that damage by persisting in their banning of an 18-year-old who acted irresponsibly, yes, but maliciously - definitely not.

From a PR perspective, I guess that having titles like "Silicon Valley company rewards Russian teenager who helped them eliminate a security risk" could even spin the episode in their favor.

This is clearly a problem: for a framework whose approach is to be 'right from the start', having no protection by default is not the right way to do it. This issue may be well known among the type of people who use Github and read HN, but if someone had merely read about Rails being an awesome framework for making db-driven websites, they might not be aware of such a thing as a "mass assignment vulnerability."

If adding a line or two to the generator code can stop this, even if it includes a comment saying "Removing this line will do x y z", then I think the Rails team could've treated the bug with a little more respect.

As @ericb said, if strong devs make this mistake, there's something wrong with the code.

I think it should also be noted that he didn't do anything malicious like trash repos, and even says on his blog:

"Then I could wipe any post in any project. That wasn't that funny but pretty
dangereous[sic]. It got more curious."

All he did was add a 3-line file to the master repo of a project that he was frustrated with. It generated all this attention, and will probably make them rethink the approach...

Finally: big props to the GitHub team for patching their vulnerability in <1hr on a Sunday...

It's a lot quicker to publicize a vulnerability than it is to patch, ascertain the scope, and verify the current situation. We're taking the time to make sure the information we'll relay to you is accurate.

> I don't feel entitled to a thorough explanation anywhere but through their own official channels

Nor do I. However, there was no indication given that an explanation would be given through official channels. If that indication was made with the "fixed it" comment, trust me, I would have kept my mouth shut and waited.

Then, he reopened the bug to prove it was a bug, and they closed it again.

Then, he submitted a new bug, 1001 years in the future, and they closed it, saying "Good one ;)"

Then, he committed a text document to master, and they got all upset about it and got Github staff involved.

How about this, Ruby devs: actually consider what you're saying. Isn't the point of Ruby to make things simple? Yet, somehow, making rational default choices applies to everything but this. I know that you love to feel better than everyone else (see: http://37signals.com/svn/posts/3124-give-it-five-minutes), but seriously, you shouldn't get all upset about it later when it turns out you're wrong.

This is why I refuse to learn Ruby. I don't want to be associated with that community. "Learn Ruby if you want to be associated with a bunch of people who call themselves Rockstar Programmers."

It's interesting how bringing up their Terms of Service gets discounted and the holy war just takes over. Nowhere did I mention Rails, Ruby, the community, or anything else.

Me: He broke the rules.
Retort: I like plant.

Disclaimer: The actions of individuals are representative of individuals. Drawing dramatic generalizations accomplishes nothing. Yes, there are more vocal asses in every community. For every asshat there are likely 100 quiet people. Right now, I'm being an asshat.

Gee, homakov is Russian. All Russians must want to hack your site. Really? No, it's a dumb generalization.

"This is why I refuse to learn Ruby. I don't want to be associated with that community. "Learn Ruby if you want to be associated with a bunch of people who call themselves Rockstar Programmers.""

You refuse to learn a language due to how a project acts? Have you ever read the {open,free,net}BSD or Linux mailing lists? You're not going to make it far in software development with an attitude like that.

It's a fairly rational argument to have a preference/affinity for programming languages based on communities surrounding them. He stated his preferences, and that's perfectly fine. You might have been too quick in passing that judgement.

Unless a quantum computing breakthrough alters the way we control computers, there will never be one language to rule them all. We'll continue having a wide diversity of languages and will take evolutionary steps to bind ideas together. A preemptive and comprehensive security model might be just one of those ideas.

There's nothing obtuse about it: new programming ideas have happened since the 70s and 80s, but as far as I know none of them is in Ruby, and you're basically assuming tylermenezes doesn't know about any of the (much older) features that did get into Ruby.

tylermenezes: the first "closed/open" thing was done by accident; @homakov didn't reopen it. I closed the second bug as it was just kind of trolling to get attention; there was no point in keeping it open.

Not all security vulnerabilities can be protected automatically by a web framework. In many cases, frameworks provide features that developers can use themselves to secure their applications.

Example:

XSS is a common web security problem. In short, it means that putting user-originated data back on the page unescaped is unacceptable. Before Rails 3.0, the Rails approach to this problem was to provide a helper (h), which you could use to escape content that you knew to be vulnerable.

Unfortunately, many Rails users did not use this feature in all places they should have used it. As a result, many applications (Twitter included) suffered from whack-a-mole XSS vulnerabilities.

We were unable to solve this problem in the 2.x branch, because automatically escaping all text being put onto the page would be a massive breaking change and would break every app in existence.

Further, simply escaping all text would not really solve the problem. For example, the "<form>" tag generated by Rails itself should not be escaped, while any of its contents or attributes provided by the application should be. Asking the user to take on the responsibility of marking Rails-generated content as not needing escaping would reintroduce the same problem we had before: people would "unescape" things too eagerly, and apps would tend to have vulnerabilities.

The solution was to release a plugin for Rails 2.3 users (rails_xss) and change the default in Rails 3. It isn't perfect: there are still cases where applications have to mark strings as "safe", and applications that have to do so often might have the same problem, but I think we did a good job, all things considered.

This case is quite similar. Rails provides all the tools necessary to have a secure application (attr_accessible is the equivalent of h), but many apps don't use it correctly (or at all). In short, a Rails security default seems to be wrong, insofar as "wrong" means that many people fail to use the security feature, causing their applications to be vulnerable.

As with the XSS feature, fixing this problem requires some thought. Simply changing the default would break a lot of applications in the wild. Like XSS, it's probably correct to do so anyway here. However, like XSS, we should make sure that we have done everything we possibly can to mitigate the additional cost associated with complying with the new default. If it's too painful, many people will overeagerly bypass the feature, reintroducing the very issue we were trying to protect them from.

In this case, I have proposed a solution that I think will mitigate a lot of the problem (https://gist.github.com/1974187), and we will likely ship it as a Rails 3.2 plugin. This will allow us to gather feedback about unexpected consequences and ensure that when we change the default for Rails 4 (which has not even shipped a prerelease yet), it actually mitigates the security problem in the vast majority of Rails applications.

These issues need to be nipped in the bud before they get too big to handle. Microsoft was stuck in this situation for many years because they had decided to favour compatibility over security. Rails now has a wide footprint on the web, which makes it even more important to enable it to be properly locked down.

I hope this marks a turning point in the way the Rails team think about security.

ps At least Rails no longer allows web crawlers to delete resources by issuing GET requests.

I don't like the suggestion. It enforces a 1:1 relationship between controller and model.

I'd rather have the model and the ORM be pulled apart and the model make this distinction.

Or create another class that knows how to safely pull values out of a params list and use it to create a model.

But it's bad enough that the controllers "look" like they belong to a model in default Rails generators. This creates a certain amount of laxity in programmer's thinking. It boxes their thinking in instead of letting their thinking go free.

For a normal project, sure, whatever, but when the security hole is in a framework--especially one as widely deployed as Rails--you eventually must concede that, yeah, the fastest way to get something serious fixed is a public exploit.

What if he'd submitted the fix to github and they'd quietly patched it and said nothing? What if they'd said something but nobody cared because hey, it's fixed now? What if they'd flat-out ignored it as the rails devs did (perhaps even citing that as their reason)?

No, security only reliably gets addressed when it hurts and hurts publicly.

The combination of hitting the exploit on perhaps the most visible site for the target audience and doing so in a way that didn't harm anything is impressive.

This is NOT a vulnerability in Rails. If you've ever used Rails, you'll probably be aware that it has several features for preventing things like this from happening (attr_accessible, attr_protected, etc). GitHub failed to use those methods, so it's a vulnerability in GitHub. All sites powered by Rails are not necessarily vulnerable to this, only ones that have not been properly coded using the aforementioned security features.

"Buffer overflows are not a vulnerability in C. If you've ever used C, you'll probably be aware that it has several features for preventing things like this from happening (strlen_s, memmove, etc.). Microsoft failed to use those methods, so it's a vulnerability in Microsoft. All apps powered by C are not necessarily vulnerable to this, only ones that have not been properly coded using the aforementioned security features."

And yet, people have stopped using C for a lot of things because of the security implications...

Look, when Rails is pushed as an out-of-the-box magic web stack--and it is--and the out-of-the-box config has these problems--which seems to be the case--maybe there is something to be said for the framework needing fixing.

Just because I can write code that prevents various exploits does not mean I expect the rest of the world to--and I sure as hell don't do so while brandishing about how even beginners can use my tools.

Provided that he did so and made the change public, and the modification was not malicious?

Yes. I'd even send him a thank-you email, and add him to our list of contributors.

I'd then of course contact the customer (probably over phone, as quickly as possible), explain the vulnerability, explain what happened, and explain how we were fixing it. Then I'd write a post about it, and put it on the front of the site.

Today we had a demonstration by a user (xxx) of a security vulnerability on our site.

${VULNERABILITY_EXPLANATION}

We believe that it is possible that your application or records are covered in the scope of the exploit, because of the fact that ${VULNERABILITY_APPLICATION}.

In order to fix this issue, we have ${VULNERABILITY_PATCH}.

We have thanked this user for their vigilance in spotting bugs and security weaknesses in our site, and they have been added to our contributors-security page here (link).

We have a stance that security is something that can only be improved by lots of inquisitive eyes, and so if you have seen any issues that concern you, please do not hesitate to inform us and/or demonstrate the vulnerability--provided, of course, you do so without breaking anything permanently. :)

Our service is better today than it was yesterday, and we hope that with the patience and openness of our users it will be still better tomorrow!

I've yet to see any of the people conflating the Rails repository with the Github team reply to any of the (many) people in this thread pointing out that creating a ticket in the Rails repository is in no way the same as warning the Github team, so I'm really curious - is there general confusion about where the lines are between Github and Rails, or are you just being obstinate?

Actually, the problem is not which repos he accessed (those can easily be determined by looking at their logs); the problem is whether, and how many, other repos have been accessed by blackhats who knew this exploit before, and what kind of commits they've made to all those projects.

It doesn't hurt that the vulnerability was already discussed in a bug report by the 'attacker.' I'd imagine it would have taken a longer time had they needed to track down how exactly this happened, though I guess I'm making an assumption on what their logging/auditing/reporting is like.

Can you give him some slack? It's Sunday, he says the vulnerability is fixed, and that is all you should care about right now. Surely you would rather have him check whether it's really fixed than issue an official statement. Surely an official statement can wait a day or two, no?

For reference, it's easy to determine if a Rails application is vulnerable to mass assignment attacks by attempting to set an attribute that is unlikely to exist (e.g., user[asdfg]). In the majority of cases a 500 error will be returned if the application is vulnerable.

We included a spec in our rails app that inspects all AR models and errors if any don't specify any attr_accessible (with a whitelist of models to ignore). This catches anything included by plugins too, which can be helpful.
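A rough standalone sketch of that check's core idea, using a stand-in Base class instead of ActiveRecord (Safe, Unsafe, and the ignore list are illustrative; a real spec would iterate ActiveRecord::Base.descendants and fail the build when offenders is non-empty):

```ruby
# Audit every model subclass: each must declare a whitelist or be
# explicitly ignored.
class Base
  def self.attr_accessible(*attrs)
    @whitelist = attrs
  end

  def self.whitelist
    @whitelist
  end
end

class Safe < Base
  attr_accessible :title
end

class Unsafe < Base; end  # never declared a whitelist

IGNORED = %w[Unsafe].freeze  # models known to never see web input

offenders = ObjectSpace.each_object(Class)
                       .select { |c| c < Base }
                       .reject { |c| IGNORED.include?(c.name) }
                       .reject(&:whitelist)

offenders  # => [] - only because Unsafe is on the ignore list
```

Drop Unsafe from IGNORED and the check flags it, which is the point: plugins that add models get caught too.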

So you're actually proud of writing that sleazy piece of damage-control distraction bullshit?

You're completely misrepresenting what happened. Someone pointed out that Rails makes all Rails applications astoundingly insecure by default since forever, and got condescendingly dismissed several times by the people in charge.

He then proceeded to make a point by demonstrating the severity of the security flaw, and you made him look like some malicious hacker that got swiftly punished by the ever-vigilant GitHub team.

Public Key Security Vulnerability, really? You detected the attack and expunged the unauthorized key? What a load of bullshit.

Until proven otherwise, all code hosted on GitHub must be assumed tainted, with potential 0-days having been inserted.

You should consider revealing a history of all public key changes for each project (assuming you still retained Apache logs) so that people can decide for themselves how much work they have ahead of them to re-audit their past commits.

This is a pretty huge security issue with wide-reaching implications. People everywhere pull and compile from master branches on Github without second thoughts. It's a big hole. But everybody's running defense for GitHub.

Contrast this with the enormous hue and cry against FB, MS, et al. when they have comparatively minor holes in their systems.

I'm not saying that we need to tar and feather GH, but we should at least be equal-opportunity in our condemnations and realize that everybody is capable of mistakes. So, if you're OUTRAGED about Apple making a minor boo-boo, you should be equally outraged about this.

Funnily, the first Diaspora release had the same issue and the devs were ridiculed and called noobs by a big part of the HN community and security "experts" wrote big posts about it. The different reaction here is interesting to say the least.

Nope, Diaspora also had this exact issue, which would let you use anything where a params hash updated a user model to e.g. overwrite their credentials or encryption keys. The specific exploitable example I found would have let you do it even if they had been checking authorization to update objects properly, because the attacker could reassign his own objects as the victim's objects with arbitrary attack payloads, one of which being sufficient to compromise the victim's account.

When I wrote a journal article about it my recommendation was that Rails ship with

ActiveRecord::Base.attr_accessible(nil)

by default, because otherwise vulnerabilities of that nature were virtually inevitable.

Ensuring input is properly validated is always the developer's concern. A framework can make that easier for you, but especially in cases like this, which are basic user authentication concerns, it is ABSOLUTELY the developer's responsibility to be 100% sure how all of those abstractions work and ensure there are no leaks anywhere.

You can't punt on security. Abstractions may make it easier, but you better be damn certain how they work.

Agreed. Once we got the SQL-Ledger codebase somewhat stabilized security-wise in LedgerSMB, we started removing it in favor of a secure-by-default framework. Our approach might not work well for other projects and it has its own security pitfalls (which we clearly document, and prevent by default), namely the fact that credential reuse is important and therefore you have to do something like HTTP-BASIC or HTTP-KRB5 auth (and hence for the former SSL really is required).

The way we do it is by not trusting the application. All authorization happens by levels further back, such as permissions granted to run stored procedures or on relations (permissions are granted to relations where there really is a clear, coterminous mapping between relational operations and procedural ones).

We then provide a framework for handling all of this. It means that as-yet-undiscovered flaws in our application are not as exploitable as they might be because the app is not trusted by the database.

The scaffold Rails generates isn't insecure by default; it just doesn't automatically generate example cases that require input validation. Perhaps it should be laden with comments about how to validate input when your code does eventually require it.

You can't blame the framework for people failing to validate input. The framework can provide you abstractions for helping you validate input, but at the end of the day, if you're being lazy or sloppy about what you do with data from the outside, that's your fault, not the framework's.

This is basic security, folks: never drive your application from stuff that comes over the wire (or any untrusted channel). Passing a params array directly to update_attributes is a fundamentally flawed approach for this reason. Instead, inspect incoming data for exactly what you expect to be there. By doing this, malformed input will either fail or be ignored without exposing a security vulnerability.

It should be obvious that you can't anticipate every potential attack vector at design time. Therefore, a well-designed system is one for which, when expected or normal conditions are not met, the resulting action is nothing or error, not an unexpected action.

I'm confused. Is this a generic Github vulnerability or is this a vulnerability in tools outside of Github used by Rails? The 'hacker' seems to suggest it's the former ("Github pwned"), which would be pretty serious stuff.

I think this is related to an issue he opened on Rails[1] which would suggest that GitHub isn't protecting against malicious mass assignment.

By default, if you have a new, create or update_attributes (and more, I imagine) call which changes various attributes based on a hash from parameters (e.g. params[:post], where you have params[:post][:title], params[:post][:body], etc.), Rails allows mass assignment of every attribute on that model, since attr_accessible is not called.

There is a method you can call in the model called attr_accessible that restricts the columns that can be updated through mass assignment, while still allowing for manual assignment of other columns.

An example of this might be a post's user_id, which you would usually set to the current user while not allowing mass assignment. Without attr_accessible, if a malicious user added params[:post][:user_id] to their POST/PUT, the Rails application would update the user_id as per the params value. If attr_accessible had been called, defining the columns the developer wanted to be mass assigned (say title and body), the user_id would not be mass assigned, and Rails would log that this was the case.

attr_accessible therefore acts as a whitelist for columns that can be mass assigned. It just so happens that the Rails default is to have no whitelist and allow all columns to be mass assigned, despite the fact that the sensible option is to always have a call to attr_accessible in your models.
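The whitelist behaviour can be sketched in plain Ruby. This is a toy stand-in for what attr_accessible does inside ActiveRecord, not the real implementation (the real one also logs ignored keys):

```ruby
# Toy model: mass assignment honours a whitelist; non-listed keys are
# silently ignored.
class Post
  ACCESSIBLE = [:title, :body]          # what attr_accessible would declare
  attr_accessor :title, :body, :user_id

  def attributes=(attrs)
    attrs.each do |key, value|
      next unless ACCESSIBLE.include?(key)  # hostile user_id is skipped here
      public_send("#{key}=", value)
    end
  end
end

post = Post.new
post.attributes = { title: "Hello", user_id: 999 }  # attacker adds user_id
post.title    # => "Hello"
post.user_id  # => nil -- never mass assigned
```

Without the whitelist (the Rails default this thread is about), the `user_id` key would be assigned along with everything else.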

GitHub uses Rails. Depending on who you ask, either Rails is at fault for having unsafe defaults or GitHub is at fault for not reading the documentation. It just so happens that he demonstrated his point on the Rails repo because he thinks it's a failing of Rails, but it could have been any repo.

It can all be tracked back to the well-known "Mass assignment/attr_accessible vulnerability" in Rails. Demonstrating that even Github - full of very smart engineers - is vulnerable is just an attempt to raise (even more) exposure to the problem.

If you look at the bug report, the core Rails Dev Team basically said that they like the defaults the way that they are. They have/had no intention of changing the defaults, and are trying to push responsibility on to the developers using Rails to use sane config settings.

Looks like the guy did report it and the response was: "Not our problem" / "Not an issue." He got frustrated and decided to make a very public example of how this is bad.

The Rails team essentially argued that it isn't on them to secure sites built with Rails. Rails provides tools to avoid this, and GitHub chose to not use them, so this is a vulnerability in GitHub. They should be using attr_accessible, and they're not.

This just reminds me of PHP years ago where making it 'easy to develop for' took precedence over making sure that the defaults were safe (security-wise). The issue with this line of thinking is that by making it easier, you attract more less-experienced devs to your platform. Devs that don't know how to read all of the documentation and configure settings in a safe way. So what you're essentially doing is giving the inexperienced more rope to hang themselves with. Just because not everyone will be burned by a lack of security doesn't mean that the 'benefits outweigh the downsides.'

The problem with this attitude is that such caveats are not mentioned where it would be most prudent to do so. Which means you have scaffolding and documentation that shows you can use mass assignment, but totally fails to mention what you also need to do if you're going to put that code into production. Instead, it's hidden in a totally separate section in the guides.

The approach this encourages is to code insecurely (because you don't know any better), without any awareness of it being insecure, and then remember to go back and secure it once you've found out what all the vulnerabilities are. Inevitably, you will waste time and probably miss things.

I agree the message is the most important part of this despite the immature way he exposed this GitHub security issue.

Rails can certainly adopt an 'opinion' regarding this issue, but I think if we were to take a look around at heavy web frameworks today, we would see a very similar approach of "let the developer decide" when dealing with Model security and serialization of fields.

These framework devs have no idea how people are going to use their models, so forcing them to whitelist everything by default may cause unnecessary headaches. Instead, they provide tools to prevent this exploit from happening should devs expose the model.

> I think if we were to take a look around at heavy web frameworks today, we would see a very similar approach of "let the developer decide" when dealing with Model security and serialization of fields

What everyone else is doing isn't a great justification; if decisions in Rails were based on what everyone else was doing, it would have been written in Java. As Rails opinions go, sensible defaults are exactly the one that suggests this should be rethought. As for rewrite work, the Rails team didn't shy away from that with Rails 3, and I think the cross-site scripting protection was worth it. And even if the default is changed, nothing stops disabling it from being a single line of code.

It could be far less nefarious than getting passwords or adding SSH keys. If, e.g. the `repo_id` attribute is not protected (and there are no further authorization checks), he could just send a POST and set the repo_id to the Rails project instead of his own repo and his commit could be pushed to master.

The issue seems to be a result of an entire hash of the parameters passed in the request being sent to the new method for models.

Is this really common practice in Rails code? Sure you can specify in the model that certain attributes can't be changed. But shouldn't this stuff be checked when validating form input? Normally I'd have a hash of filters, with the field/column name mapped to the appropriate set of rules for that field. Anything that's not specified in the filters doesn't go to the model's new method.

In this case, it's the equivalent of PHP code where you sanitize the data in $_POST and then just send that whole variable to the database.

Choosing which fields are accepted shouldn't just be a model security issue; it's a form validation issue. This also makes choosing whether an admin can change a field trivial. If a request is made as an admin (maybe through a form that's only accessible by someone with an admin role), then you just apply the validation rules for an admin. Otherwise you apply the rules for a user.
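That filter-hash approach can be sketched in plain Ruby. The field names and rules here are hypothetical; a field only reaches the model's constructor if it has a rule and passes it:

```ruby
# Hypothetical filter table: field/column name -> validation rule.
FILTERS = {
  title: ->(v) { v.is_a?(String) && v.length <= 100 },
  body:  ->(v) { v.is_a?(String) },
}

# Anything without a rule, or failing its rule, never reaches the model.
def filter_input(params)
  params.select { |key, value| FILTERS.key?(key) && FILTERS[key].call(value) }
end

filter_input(title: "Hi", body: "Hello", user_id: 42)
# => { title: "Hi", body: "Hello" }
```

A role-aware variant would just select a different filter table for admin requests, which is the trivial admin/user split described above.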

It's common in tutorials and quickstart guides, usually with a note saying that you really should be protecting things and not using mass-assignment. It just makes the blog in 5 minutes videos cleaner.

Reminds me of all the PHP tutorials floating around that only treat security as a side note. Essentially, the default configuration with Rails seems to be that users can set the data for any database column. It's almost as bad as register_globals...

Yes, mass assignment with Model.new(params) and model.attributes=params is a best practice for professional production Rails websites. Business and security rules for field updates are coded in the model (attr_accessible/attr_protected/validates).

That's how it's been since Rails 1. Which is cool. But it's error-prone for newbies, especially when Rails's model and controller generators make all attrs writeable by default, with nary a generated comment about how or why to lock things down. In a culture of convention over configuration, attrs should be locked down by default: "config.active_record.whitelist_attributes=true" for new apps, and throw a helpful message when I mass assign to a model that has no accessible attrs configured yet.
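A minimal sketch of what that opt-in looks like in Rails 3.2 (the model and attribute names are illustrative):

```ruby
# config/application.rb -- opt in to whitelisting by default (Rails 3.2+)
config.active_record.whitelist_attributes = true

# app/models/post.rb -- each model must then declare what's mass assignable
class Post < ActiveRecord::Base
  attr_accessible :title, :body   # user_id and everything else stay protected
end
```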

> What I want you to see in that thread I mentioned is the way the core team perceives this. You are not discovering anything unknown, we already know this stuff and we like attr protection to work the way it is.

Looks like this guy got really frustrated with the Rails devs basically saying that he didn't know what he was talking about. This reminds me of all of the unsafe defaults that PHP used to have. Same justification too, "it's a config setting, so it's up to the developer/sysadmin to read the docs and set them right."

This justification seems especially odd to me since Rails did so much in the first place to popularize the idea that the default behavior should be the one most likely to be "right". Don't they (or didn't they at one time) have a mantra "convention over configuration"?

This also makes it all the more serious. As the PHP developers found out the hard way:

When you make it really easy to get started, a lot of people won't learn the system in depth enough to understand all the issues because they don't need to in order to make it work "well enough" for most cases.

By making Rails so easy to get started, they pretty much guarantee that there's going to be a ton of developers that don't pick up on, or forget, that they need to deal with issues like this.

That even a site like GitHub was vulnerable to this demonstrates just how seriously wrong it is to pick a default like this.

This guy brought up the vulnerability and the maintainers didn't seem to take it seriously, perhaps because he wasn't articulate enough or didn't approach them with enough respect for their liking. I wish they had kissed his ass a little to get the low-down on the vulnerability so I didn't have to worry about my company's private GitHub repos. He deserves props for bringing it up for discussion.

This has been discussed since the early days of Rails, and they have chosen to leave their defaults as they are and encourage developers to implement model security as needed. GitHub (seemingly) did not implement model security. This vulnerability differs from application to application and, if the team is following best practices, is not present.

I don't get how many of you seem to take this issue lightly. Imagine what would've happened if this guy were a black hat. I use GitHub for private code hosting; this is a definite breach of trust, and if I can't trust GitHub, how can I pay them? Sure, they fixed the issue within an hour, but this could have ended far worse.

So, the vulnerability was public for at least days in homakov's bug report, and probably for years to anyone who wanted a crack at github badly enough to do a little research. Is it paranoid to worry about malicious commits in other important github repositories?

In my uses of rails, belongs_to associations get changed frequently. Any time anything gets 'assigned' to something, this occurs: assigning something to a user, to a milestone, to a plan, to an account, maybe even to a priority. YMMV but I'd say this isn't minute.

That said, there are certainly places in my apps where I don't want this to occur. And whitelisting is much better than blacklisting!

"There is probably a minute number of cases where someone wants mass assignment changes to include the parent's id of that record"

So it sounds like you aren't using mass assignment to change parent ids either. Most people don't. Most people use mass assignment to change the attributes of an object, not that object's association to a parent.

Insecure by default. Microsoft used to be the laughing stock because the default install was vulnerable to exploits. While one might argue that Github devs should have known better, the counterargument is that if Github devs couldn't get it right, think about the thousands of people trying out rails for the first time, building their little web app.

Technically it would be more like: "I just set it to safer defaults, since you guys won't. By the way, if not even GitHub - the place you trusted to host your code - gets your defaults right, that pretty much makes them unsafe defaults."

Regardless of the 'hacker's motives/personality, I think this is yet another testimony to the power of open source.

When you have this many eyeballs looking at your code, the odds of a well-intentioned (however playful or immature) coder discovering a vulnerability are much greater than those of a genuinely ill-intentioned hacker, simply due to the sheer number of the former. The issue will quickly get fixed by the community, the kid will get the attention he wants, and life will go on.

The vulnerability is that Rails is insecure by default. That used to be the case for a lot of things, then finally people noticed how the real world works, and started fixing them. Apparently the Rails developers have actively resisted the lesson everyone else already learned.

Lots of projects provided the tools to do lots of things right in years past, but they eventually came to recognize that if they didn't provide secure defaults, they were ultimately harming everyone out of some twisted sense of principle. Insecure defaults are thus now considered a vulnerability in the original project.

There's a very popular idiom in Rails development of updating model data from a form POST/PUT in one line of code in a controller:

my_object.update_attributes(params[:object])

Because many users follow this approach, the hack was widely exploitable. You can either assign parameters piecemeal in the controller or explicitly set the attr_accessible attributes in the model. There's nothing inherent in Rails that caused this vulnerability; rather, it was the programming practices of developers.
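The piecemeal alternative can be sketched in plain Ruby, with an OpenStruct standing in for the model so the idea runs outside Rails (the params hash is hypothetical):

```ruby
require 'ostruct'

# Simulated request parameters; user_id is attacker-supplied.
params = { post: { title: "Hi", body: "Hello", user_id: 42 } }

# Piecemeal assignment: only the fields the controller names are copied,
# so the hostile key never touches the object.
post = OpenStruct.new
post.title = params[:post][:title]
post.body  = params[:post][:body]

post.title    # => "Hi"
post.user_id  # => nil
```

It's more typing than `update_attributes(params[:post])`, which is exactly why the one-liner idiom stayed popular despite the risk.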

There's a fine line between (1) "insecure by default" and (2) the existence of a very popular idiom that is dangerous unless accompanied by other checks.

Many PHP apps used to rely on register_globals without proper input checking, and when those apps got hacked, it was clearly their own fault. Just like GitHub is primarily responsible for today's exploit. But that didn't prevent people from calling PHP "insecure by default" for enabling register_globals in the first place.

Even a genius eventually makes a mistake. Systems should be safe by default unless there's a good reason (e.g. performance). Why bother with high level tools if they're not even protecting you from mistakes?

It's unfortunate that the guy stumbling upon this apparently noteworthy vulnerability happens to be so utterly immature. [EDIT: Unsurprisingly, the dude's 18. See http://homakov.blogspot.com/p/about-me.html for reference.]

Looks like he's in Russia. Good luck. It wasn't that long ago that the common belief was that stunts like this are what got you white-hat jobs. The sensible option still seems to be to recruit someone into the legal security-analysis field before one of their targets overreacts to a financially harmless taunt by destroying their life.

It's a pretty insane leap of logic to maliciously attack the Github website to prove your point regarding a framework hosted on that website. Politely contacting people about the vulnerability, rather than pulling that crap in an extremely public venue seems like the more mature, and less incredibly illegal way to go about things.

The "attack" clearly wasn't malicious, though probably immature. A malicious attacker would have been doing things like gaining access to users' private repos and stealing the code, or trying to sneak in harmful commits to a repo under a false name. This was at most a prank or a demonstration. So why this instead of responsible disclosure?

The problem is that it was not really a GitHub issue, it's Rails having a grossly insecure default setting. According to one of the comments in the bug tracker a lot of other high-profile sites also have the same problem. And presumably new ones would keep popping up for as long as Rails is the new hotness.

So disclosing the problem to GitHub would not solve anything. They'd deploy the fix, but a lot of other sites remain vulnerable. That's probably the case even if GitHub were willing to take a PR hit and admit they'd been insecure for a long time in order to spur other Rails users to fix their code. After all, they hadn't fixed their code after the previous widely published security problems caused by the same underlying issue.

Clearly the Rails core team were not willing to consider any kind of changes to improve the situation. As such, you can argue that making as big a scene as possible is the best way to improve security globally. It's of course unfortunate for GitHub that he chose to use that site as the example due to the obvious reasons. And certainly the timing is about as unfriendly as possible from the point of view of a west coast person.

* it provides a how to for other individuals to repeat the attack, in a public forum.

* it was made against an innocent third-party.

* I doubt steps were taken to contact the third-party.

* it was made on a Sunday morning, making it difficult to scramble and get a fix out the door.

"Clearly the Rails core team were not willing to consider any kind of changes to improve the situation"

The ticket was opened three days ago. Are you advocating that if an issue isn't resolved in an open-source project, in under a week, the individual raising the issue should be able to publicly attack anyone using the project?

Also, I'll put this gem of a quote out there:

"not only github is vulnerable this way - I found a lots of rails apps that are waiting for my hack! Yeah, it is only start" (mwahahahaha).

Whether it was malicious depends on the motives, and it's very hard for me to see where the malice is. To me it looks more like he just wanted to bring attention to the issue in order to get it fixed.

I already agreed that GitHub were innocent bystanders and that the timing was unfortunate. But if getting publicity to the issue was the main point, it's also easy to see why GitHub was the perfect target. I also already explained why it could make perfect sense to demonstrate the vulnerability in a public manner rather than just disclose it to one of the many sites suffering from the problem. None of that is a sign of malice, it's at most bad judgement.

The ticket having been opened only three days ago would be a good point if it hadn't also been closed and declared to be working as intended with a pointer to a previously closed bug about the same issue.

Agreed. Immature - certainly. I don't think his actions were done with malicious intent. Maybe insolent, but that only counts in the military. If he wanted to he could have made his point in a far more malicious manner.

And the octocat tattoo... How can you not like a guy with an octocat tattoo!

He could probably have made a lot of money from this if he hadn't disclosed it. A lot of people store various secret keys along with their apps on GitHub. Many people even have entire repositories of secret keys. By gaining access to these repos (if he could get access to private repos) he could likely have gotten into apps from which he could have taken money.

If we're discussing how adults act, then I definitely would not refer to reporting a crime as "tattling" (I might expect that from a child in Elementary school, though). Zero-day attacks are extremely irresponsible, and this is probably against the law.

Given the guy's English, I doubt he lives in the US and I don't think this is a serious enough case to start an international procedure. I surely hope not, that would just be a huge waste of money and resources, the guy doesn't look like a rich man anyway.

I imagine it'd be pretty hard to get law enforcement interested in this. It's hard enough getting them to care about a single stolen credit card number - imagine reporting "someone added a harmless comment to prove a vulnerability" to the cops.