Technically, you answered your own question: compress the JSON and be done with it. More importantly, you never mention an actual business case for spending money and time on this activity.
– Jarrod Roberson, Jun 22 '12 at 16:41

2 Answers

If you implement it, you need to change not just your server but all of your clients (although you can support both formats and migrate clients as needed). That will take time and testing, which is a direct cost. And don't underestimate the time it takes to really understand protocol buffers (especially the reasons to make a field required or optional), or the time it takes to integrate the protobuf compiler into your build process.
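The "support both formats" migration path usually comes down to content negotiation on the server. Here is a minimal sketch of that idea in Java; the `Codec`, `JsonCodec`, and `ProtoCodec` names are hypothetical stand-ins, not a real framework API, and the encoders are stubs where generated protobuf code and a JSON mapper would actually go.

```java
import java.util.Map;

// Hypothetical codec abstraction: one implementation per wire format.
interface Codec {
    byte[] encode(Map<String, Object> body);
    String contentType();
}

class JsonCodec implements Codec {
    // Stand-in for a real JSON mapper (e.g. Jackson).
    public byte[] encode(Map<String, Object> body) { return body.toString().getBytes(); }
    public String contentType() { return "application/json"; }
}

class ProtoCodec implements Codec {
    // Stand-in for generated protobuf serialization code.
    public byte[] encode(Map<String, Object> body) { return new byte[0]; }
    public String contentType() { return "application/x-protobuf"; }
}

public class Negotiate {
    // Pick a codec from the request's Accept header; default to JSON
    // so existing clients keep working until they opt in.
    static Codec pick(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("application/x-protobuf")) {
            return new ProtoCodec();
        }
        return new JsonCodec();
    }

    public static void main(String[] args) {
        System.out.println(pick("application/x-protobuf").contentType());
        System.out.println(pick("application/json").contentType());
        System.out.println(pick(null).contentType()); // old clients send no opt-in
    }
}
```

Defaulting to JSON is the important design choice: it lets you roll out the new format client by client instead of in one risky cutover.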

So does the value exceed that? Are you faced with a choice of "our bandwidth costs are X% of our revenues and we can't support that"? Or even "we need to spend $20,000 to add servers to support JSON"?

Unless you have a pressing business need, your "pros" aren't really pros, just premature optimization.

I maintain APIs, and somebody before me added protobuf (because it was "faster"). The only thing that's actually faster is the round-trip time, because of the smaller payload, and that can be fixed with gzipped JSON.
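To make the gzip point concrete, here is a small self-contained Java sketch using only `java.util.zip` from the standard library. The payload below is invented for illustration, but it is typical of API responses: repetitive keys compress extremely well.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipJson {
    // Gzip-compress a string, as an HTTP server would with Content-Encoding: gzip.
    static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A made-up but typical list response: the field names repeat in every element.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 200; i++) {
            sb.append("{\"id\":").append(i).append(",\"status\":\"ACTIVE\"},");
        }
        sb.setCharAt(sb.length() - 1, ']');
        String json = sb.toString();

        System.out.println("raw bytes:     " + json.getBytes(StandardCharsets.UTF_8).length);
        System.out.println("gzipped bytes: " + gzip(json).length);
    }
}
```

Most HTTP stacks will do this for you transparently (`Content-Encoding: gzip`), so you get the bandwidth savings without touching your serialization code at all.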

The part that is distasteful to me is the relative work of maintaining protobuf (compared to JSON). I use Java, so we use Jackson object mapping for JSON: adding something to a response means adding a field to a POJO. But for protobuf I have to modify the .proto file, then update the serialization and deserialization logic that moves the data in and out of the protocol buffers and into POJOs. More than once a release has gone out where someone added a field and forgot to write either the serialization or the deserialization code for the protocol buffers.
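The failure mode described above can be sketched in plain Java without pulling in Jackson or protobuf. The reflective mapper below stands in for what a reflection-based library like Jackson does (new fields show up automatically), while the hand-written mapper stands in for the manual protobuf-to-POJO glue code; `PriceResponse` and its `discountCents` field are hypothetical names for illustration.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical response POJO; imagine discountCents was added in a recent release.
class PriceResponse {
    public String sku = "A-100";
    public int cents = 1999;
    public int discountCents = 200; // the newly added field
}

public class MappingDrift {
    // Reflection-based mapping (what a Jackson-style mapper does):
    // every public field is picked up automatically, including new ones.
    static Map<String, Object> reflective(Object o) throws IllegalAccessException {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Field f : o.getClass().getFields()) {
            out.put(f.getName(), f.get(o));
        }
        return out;
    }

    // Hand-written mapping (the protobuf-to-POJO glue code):
    // each field must be copied explicitly, so a new field is easy to forget.
    static Map<String, Object> handWritten(PriceResponse r) {
        Map<String, Object> out = new LinkedHashMap<>();
        out.put("sku", r.sku);
        out.put("cents", r.cents);
        // discountCents was never added here -- the bug ships silently.
        return out;
    }

    public static void main(String[] args) throws Exception {
        PriceResponse r = new PriceResponse();
        System.out.println(reflective(r).keySet());  // includes discountCents
        System.out.println(handWritten(r).keySet()); // missing discountCents
    }
}
```

Nothing fails to compile and nothing throws; the field is simply absent from the wire, which is exactly why these omissions make it into releases.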

Now that clients have implemented against the protocol buffers, it's almost impossible to get away from.