PeterMcCluskey

Regulations shouldn’t be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments, i.e. give every trader a free account. Just make sure any one trader can’t create many accounts.
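One standard way to run a subsidized market with free accounts is Hanson’s logarithmic market scoring rule (LMSR), where the sponsor’s worst-case loss is bounded up front. A minimal sketch (the function and parameter names here are my own, not from any particular market’s API):

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, b, i):
    """Instantaneous price of outcome i; prices sum to 1 across outcomes."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, b, i, shares):
    """Amount a trader pays to buy `shares` of outcome i."""
    new_q = list(quantities)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# Two-outcome market, liquidity parameter b = 100.
b = 100.0
q = [0.0, 0.0]

# The sponsor's worst-case subsidy is bounded by b * ln(num_outcomes),
# so the total giveaway is known before any free accounts trade.
max_subsidy = b * math.log(2)

cost = trade_cost(q, b, 0, 10)  # price of buying 10 "yes" shares
```

The liquidity parameter `b` is the knob that trades off subsidy size against price stability, which is one reason it’s hard to predict in advance how much good predictions will cost.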

Alas, it’s quite hard to predict how much it will cost to generate good predictions, regardless of what approach you take.

Drexler would disagree with some of Richard’s phrasing, but he seems to agree that most (possibly all) of (somewhat modified versions of) those 6 reasons should cause us to be somewhat worried. In particular, he’s pretty clear that powerful utility maximisers are possible and would be dangerous.

I think it’s more appropriate to use Bostrom’s Moral Parliament to deal with conflicting moral theories.

Your approach might be right if the theories you’re comparing used the same concept of utility, and merely disagreed about what people would experience.

But I expect that the concept of utility which best matches human interests will say that “infinite utility” doesn’t make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.

Similarly, I use a dealist approach to morality. If you show me an argument that there’s an objective morality which requires me to increase the probability of infinite utility, I’ll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom’s parliament than like your approach.

I don’t identify 100% with future versions of myself, and I’m somewhat selfish, so I discount experiences that will happen in the distant future. I don’t expect any set of possible experiences to add up to something I’d evaluate as infinite utility.

For things like nuclear war or financial meltdown, we’ve got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I’m guessing it will take something like $1 billion in focused funding).

With AGI, ML researchers can be influenced to change their forecast by 75 years by subtle changes in how the question is worded. That suggests unusual uncertainty.

We can see from Moore’s law and from ML progress that we’re on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I’m unsure how good they are at evaluating events that happen much less than once per century.

How strictly do you mean when you say “provably safe”? That seems like an area where all AI safety researchers are hesitant to say how high they’re aiming.

And by “have it implemented”, do you mean fully develop it on their own, or do you include scenarios where they convey key insights to Google, and thereby cause Google to do something safer?

I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.

Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives the people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets are due to replacing bad incentives with incentives that are closely connected with accurate predictions.

There are multiple ways to produce good incentives, and for internal office predictions, there’s usually something simpler than prediction markets that works well enough.

It does seem like there are important areas where medical research is inadequate. I’ll suggest that part of the problem is inadequate effort devoted to treatments that aren’t protected by patents.

It looks like some unknown fraction of ME/CFS is caused by low thyroid hormone levels. “Subclinical” hypothyroidism has symptoms that are pretty similar to those of ME/CFS. They are usually distinguished by TSH tests. [TSH is the standard measure of thyroid levels; there are a number of other options, none of which are ideal.]

Here’s speculation that we should distrust TSH results. (There’s a more detailed and very verbose version of that speculation here.)

There’s plenty of confusion about when it’s wise to increase a patient’s thyroid hormone. E.g. this small RCT, which gave a standard T4 dose rather than adjusting the dose to achieve some measure of optimal hormone levels. The reported TSH levels of 0.66 in patients receiving T4 suggest that many patients got more than the optimal dose, and/or didn’t convert T4 to T3 well.

In contrast, two smaller uncontrolled studies (here and here) reported good results from T3 treatment for treatment-resistant depression (H/T Sarah Constantin). Plus there are lots of anecdotal reports of benefits (see mine here).

There are real dangers from overdoses, and it’s unclear how well researchers have measured the benefits, so it’s easy to imagine that most doctors are erring on the side of inaction.

My intuition says that there’s plenty of room for making protocols that more safely determine the optimal dose. I don’t have enough expertise to estimate how tractable that is.

Another area where EAs might possibly provide an important benefit is Alzheimer’s. There have been some recent claims that there are strategies which substantially prevent Alzheimer’s or reverse it in early stages. As far as I can tell, these claims aren’t prompting as much research as they deserve.

Some parts of those strategies are backed by small RCTs published in 2013 and 2012, and yet the first Google search result for Alzheimer’s is still a page that says Alzheimer’s “cannot be prevented, cured or even slowed”.

I expect good research about Alzheimer’s to be too expensive for EAs to fund directly, but it seems like we should be able to do something to nudge existing research funding in better directions.

“colonisation of the Supercluster could have a very low probability.”

What do you mean by very low probability? If you mean a one-in-a-million chance, that’s not improbable enough to answer Bostrom. If you mean something that would actually answer Bostrom, then please respond to the SlateStarCodex post Stop Adding Zeroes.

I think Bostrom is on the right track, and that any analysis which follows your approach should use at least a 0.1% chance of more than 10^50 human life-years.
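The arithmetic behind that floor is worth making explicit: even a 0.1% probability, multiplied by a payoff that large, dominates almost any finite near-term consideration. A trivial sketch (the numbers are just the bounds stated above, not a real estimate):

```python
# Expected value of a 0.1% chance of more than 10^50 human life-years.
p = 1e-3          # probability floor suggested above
payoff = 1e50     # life-years, the lower bound on the payoff
ev = p * payoff   # roughly 10^47 expected life-years

# For comparison: ~10^10 people alive today times ~10^2-year lifespans
# is ~10^12 life-years, dwarfed by the expected value above.
present = 1e10 * 1e2
```

Whether to take such expected-value calculations literally is exactly what the parliament-vs-aggregation disagreement above is about; the sketch only shows why the numbers can’t be waved away by adding a zero or two.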

You claim this is non-partisan, yet you make highly partisan claims, such as “conservatives have relied much more on lies” (you cite Trump’s lies, but treating Trump as a conservative is objectionable to many conservatives).

Voter registration has similar problems with estimating how it affects goals such as lives saved, but seems to be missing an analysis of whether the expected number of lives saved is positive or negative.

I’ll guess that the most important effects of this would be to influence which species get uploaded when, reducing the chances that the world will be ruled by uploaded bonobos, and increasing the chance of nonprimates ruling.

On the Nymex, they currently go out to Dec 2024. That contract appears to trade less than once a week.

There might be occasional contracts for more distant years traded between institutional investors that don’t get publicly reported, but the low volume on publicly traded contracts suggests people just aren’t interested in trading such contracts.