When Facebook started noticing that a particular kind of message was getting a negative response among users, employees would contact developers and suggest design changes. That was bad for Facebook--it sucked up employee bandwidth--and it was bad for developers, who felt like Facebook was micromanaging them.

Interestingly, Facebook decided not to go the law-and-order route. They didn't start writing out long lists of rules about what kinds of messages would be allowed and what kinds wouldn't. Instead, Taylor told Fast Company in an interview following the talk, they built an automated system that monitored each individual message, and then took automated action against the specific messages that seemed to be bothering users.

Specifically, the system tracks whether recipients hide certain messages or mark them as spam, whether they click "Like" on the message or comment on it, and whether they actually click through to see the application itself. "Using a bunch of signals like that, we're able to infer the likelihood that something is a high-quality message," Taylor said, or, conversely, a low-quality one.
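To make the idea concrete, here is a minimal sketch of that kind of signal aggregation. The signal names, weights, and scoring function are illustrative assumptions, not Facebook's actual model; the point is simply that positive engagement (likes, comments, click-throughs) pushes a message's inferred quality up, while hides and spam reports push it down.

```python
import math

# Hypothetical weights for each engagement signal -- illustrative
# assumptions only, not Facebook's real model.
WEIGHTS = {
    "likes": 1.0,          # recipients clicked "Like"
    "comments": 1.0,       # recipients commented on the message
    "clickthroughs": 0.5,  # recipients clicked through to the app
    "hides": -2.0,         # recipients hid the message
    "spam_reports": -4.0,  # recipients marked it as spam
}

def quality_score(signals: dict) -> float:
    """Combine per-message engagement counts into a quality estimate in (0, 1)."""
    raw = sum(WEIGHTS[name] * signals.get(name, 0) for name in WEIGHTS)
    # Squash the raw weighted sum into (0, 1) with a logistic function,
    # so the result reads as a likelihood that the message is high quality.
    return 1 / (1 + math.exp(-raw))

# A well-received message scores near 1; a widely hidden and
# reported one scores near 0.
engaging = quality_score({"likes": 10, "comments": 4})
annoying = quality_score({"hides": 8, "spam_reports": 3})
print(engaging, annoying)
```

A system like this lets action thresholds be applied per message rather than per developer, which matches the hands-off approach described above: no rulebook, just measured user reaction.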

[...] All of which bodes well for Facebook. The better experiences it can create while minimizing the demand on its own resources, the faster it will be able to grow, and the more loyalty it will get from its users and developers.