How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?

Altruists often would like to get good predictions on questions that don't necessarily have great market significance. For example:

Will a replication of a study of cash transfers show similar results?

How much money will GiveWell move in the next five years?

If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?

If a donor would like to give money to help make better predictions, how can they do that?

You can't just pay people to make predictions, because there's no incentive for their predictions to actually be accurate and well-calibrated. One step better would be to pay out only if their predictions are correct, but that still incentivizes people who may be uninformed to make predictions, because there's no downside to being wrong.

Another idea is to offer to make large bets, so that your counterparty can make a lot of money by being right, but also wants to avoid being wrong. That would incentivize people to actually do research and figure out how to make money by betting against you. This idea, however, doesn't necessarily give you great probability estimates, because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, that could mean they believe the true probability is 60% or 99%, and you don't have any great way of knowing which.
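The underlying arithmetic can be sketched as follows; the function name and the stake convention are my own illustration, not a standard betting API:

```python
def expected_profit(belief, offered_prob, stake=100.0):
    """Expected profit from taking the 'yes' side of a bet offered at
    implied probability offered_prob, given your own belief about the event.
    Convention: you risk stake * offered_prob to win stake * (1 - offered_prob).
    """
    win = stake * (1 - offered_prob)
    lose = stake * offered_prob
    return belief * win - (1 - belief) * lose

# Both a 60% believer and a 99% believer profit from 'yes' at 50% odds,
# so their acceptance tells the offerer nothing about which belief they hold.
profit_60 = expected_profit(0.60, 0.5)
profit_99 = expected_profit(0.99, 0.5)
```

Since both expected profits are positive, observing that someone took the bet only bounds their belief to one side of your offered odds.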

You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it's probably a lot more expensive than necessary. A slightly cheaper method would be to determine the "true" probability estimate by binary search: offer to bet either side at 50%; if someone takes the "yes" side, offer again at 75%; if they then take the "no" side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.
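That binary search can be sketched in a few lines; `counterparty_takes_yes` is a hypothetical oracle standing in for observing which side of the offered bet gets taken:

```python
def elicit_probability(counterparty_takes_yes, tolerance=0.01):
    """Binary-search for the odds at which neither side of the bet looks attractive.

    counterparty_takes_yes(p) should return True if someone accepts the
    'yes' side of a bet offered at probability p (implying they think the
    true probability is above p), and False if they take the 'no' side.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tolerance:
        p = (lo + hi) / 2
        if counterparty_takes_yes(p):
            lo = p  # bettors think the event is more likely than p
        else:
            hi = p  # bettors think the event is less likely than p
    return (lo + hi) / 2

# Example: counterparties who believe the true probability is 0.62.
estimate = elicit_probability(lambda p: 0.62 > p)
```

Each round of the search requires actually funding a bet, which is why this converges on a probability but remains expensive in practice.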

In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with them; and the predictions will get more and more accurate. I'm not sure that's how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they're considered similar to gambling. I'm not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don't know about.

Anthony Aguirre (co-founder of Metaculus, a website for making predictions) proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of making good predictions, even though they don't get paid directly for accurate predictions per se.
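One standard way to quantify such a track record is a proper scoring rule like the Brier score. The sketch below is my own illustration of the idea, not a description of how Metaculus actually scores or would pay forecasters:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and binary outcomes.
    Lower is better. It is a proper scoring rule: reporting your honest
    probability minimizes your expected score."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track records: (stated probability, actual outcome) pairs.
records = {
    "alice": [(0.9, 1), (0.2, 0), (0.7, 1)],
    "bob":   [(0.5, 1), (0.5, 0), (0.5, 1)],
}

# Pay the forecasters with the best (lowest) historical Brier scores first.
ranked = sorted(records, key=lambda name: brier_score(records[name]))
```

Because the score is proper, a forecaster paid for a good track record has no incentive to misreport their true probabilities along the way.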

Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.

One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the Good Judgment Project (GJP). In theory, they don't have a strong incentive to make good predictions, but they did win IARPA's 2013 forecasting contest, so in practice it seems to work. I haven't looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.

Based on my limited research, it looks like donors may be able to incentivize predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I'm not sure what. I still have some big open questions:

What is the best way to get good predictions?

How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?

How accurate can predictions be? What about relatively long-term predictions?

Assuming it's possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?

Is it possible to get good predictions from prediction markets, given the current state of regulations?

1. I've been doing a decent amount of thinking and experimentation in similar work recently. I'm personally optimistic about non-market approaches like GJP and Metaculus. I think the path for such groups to pay forecasters is much more straightforward than it is for prediction markets. I think there could be a lot more good work in this area.

2. GJP charges several thousand dollars per question, but Metaculus is free, assuming they accept your questions. I think the answer to this is very complicated; there are many variables at play. That said, I think that with a powerful system, $50k-500k per year in predictions could get a pretty significant informational return.

3. This is also a very vague question; it's not obvious what metrics would best answer it. That said, if a good prediction system is built, it could help answer this question in specific quantitative ways. It seems to me that a robust prediction system should be at least roughly as accurate as a non-predictive system with the same people. Long-term predictions are tricky, but I think we could have some basic estimates of bias.

4. This is also a huge question. I think there's a lot of experimentation yet to be done here on many different kinds of questions. If we could have meta-predictions on things like "How important will this question turn out to have been to include in the system?", then we may be able to use the system itself to answer and optimize here.

5. I'm not very optimistic about prediction markets. This is of course something that would be nice to formally predict over the next 1-3 years.

One option we were looking to use at Verity is the "contest" model: an interested party subsidizes a particular question, and the pool is then split between forecasters based on their reputation/score after the outcome has come to pass. This subsidizes specific predictions, rather than subsidizing predictions in general as paying people for their overall score does. It also has similarities to the subsidized prediction market model.
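A minimal sketch of that payout rule, assuming non-negative scores and a simple proportional split (the actual contest rules at Verity could well differ):

```python
def split_pool(pool, scores):
    """Split a question's subsidy among forecasters in proportion to their
    non-negative accuracy scores, computed after the outcome resolves."""
    total = sum(scores.values())
    if total == 0:
        # No one scored anything; split evenly rather than paying nothing out.
        return {name: pool / len(scores) for name in scores}
    return {name: pool * s / total for name, s in scores.items()}

# Hypothetical scores after a question resolves.
payouts = split_pool(1000.0, {"alice": 3.0, "bob": 1.0})
```

One design consideration: with a proportional split, a forecaster's payout depends on how others score, which is what ties the subsidy to relative accuracy on that specific question.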

Regulations shouldn't be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments: give every trader a free account. Just make sure any one trader can't create many accounts.

Alas, it's quite hard to predict how much it will cost to generate good predictions, regardless of which approach you take.