There is a blog that I would care never to read again, even in moderation. I added the blog to my hosts file, pointed at localhost, so now I can't visit it anymore. But my lizard-brain has found a workaround: if I google the blog I can read Google's cache. Is there a way to block Google's cache of the blog without blocking the rest of Google's functions?
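One partial option, sketched below: Google's cached copies were historically served from a separate host, webcache.googleusercontent.com, so the same hosts-file trick can be applied to it. The caveat is that this blocks all cached pages, not just the one blog; the file path here is a stand-in for the real hosts file (/etc/hosts on Unix-likes, which needs root to edit).

```shell
# Sketch: block the cache host the same way as the blog itself.
# "./hosts.test" stands in for /etc/hosts (editing that needs sudo).
# Caveat: this blocks ALL of Google's cached pages, not just one blog.
HOSTS_FILE=./hosts.test
printf '127.0.0.1 webcache.googleusercontent.com\n' >> "$HOSTS_FILE"
grep 'webcache' "$HOSTS_FILE"
```

This doesn't touch search, mail, or other Google functions, since only the cache-serving hostname is redirected.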

I had a related question which may be of import for neuros. I heard that a significant part of our nervous system lies in the gut, and is sometimes called "the second brain" in jest. As another example, I'm an amateur musician, and I'm a bit worried that semi-automatic processing of finger motion may lie in nervous ganglia well outside my main brain, closer to my fingers (which may be necessary to play impossible pieces while smiling; no, I'm not that good). It would be a chore to learn basic movements again.

My question is: do we have any evidence about whether important information that cannot be recovered from stem cells may lie outside our skull? (Which is not the same as saying the brain holds most such information.) Stated a bit differently, do we have reasons to think that 1.0 fidelity for neuros is impossible, even in principle?

If you're interested in experiencing what an actual D&D session is like without having to actually play in one, there are a number of actual-play podcasts on the internet that are essentially recordings of people's sessions.

Yes, Alexei (aka Bent Spoon Games) and I talked about the name recently; to promote its use in university courses teaching Bayesian statistics, we're sticking with Credence Game. Confidence means something slightly different in statistics, and the game is meant to teach not just calibration, but also the act of measuring belief strength itself. The name update on BSG, and in the app itself as downloaded from there, will happen soon enough.

Question 1: This depends on the technical details of what has been lost. If it is merely an access problem, i.e. if there are good reasons to believe that current/future technologies of this resurrection society will be able to restore my faculties post-resurrection, I would be willing to go for as low as .5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so, memories are just gone, but I could repair my ability to remember in the future), probably .9. If the difficulties would literally be permanent, 1.0, but that seems unlikely.

Question 2: Outside of asking me or my friends/family (assume none are alive or know the answer) the best they could do is construct a model based on records of my life, including any surviving digital records. It wouldn't be perfect, but any port in a storm...

Question 3: Hm. Well, if it was possible to revive someone who was already in the equivalent state before cryonics, it would probably be ethical provided that it didn't make them WORSE. Assuming it did... draw lots. It isn't pretty, but unless you privilege certain individuals, you end up in a stalemate. (This is assuming it is a legitimate requirement: all other options have been effectively utilized to their maximum benefit, and .50 is the best we're gonna get without a human trial.) A model of the expected damage, the anticipated recovery period, and what sorts of changes will likely need to be made over time could make some subjects more viable for this than others, in which case it would be in everyone's interest if the most viable subjects for good improvements were the ones thrown into the lots. (Quality-of-life concerns might factor in too: if Person A is 80% likely to come out a .7 and 20% likely to come out a .5, and Person B is 20% likely to come out a .7 and 80% likely to come out a .5, then ceteris paribus you go for A and hope you were right. It is unlikely that all cases will be equal.)

Suppose you are an anti-natalist; what does efficient charity look like then? What is the most cost-effective way to reduce the number of births? I imagine giving out cheap birth control in places undergoing a demographic transition is pretty OK?

The straight number of births isn't the right metric; you need the number of births times the misery per birth, minus the opportunity cost of one less person.
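Read one way, the suggested metric could be sketched as follows (all numbers and the function name are hypothetical, and this is only one reading of the comment):

```python
# Toy sketch of the proposed anti-natalist metric:
# net disutility = births * (misery per birth - opportunity cost
# of one less person existing). All inputs are made-up units.
def birth_disutility(births, misery_per_birth, opportunity_cost_per_person):
    return births * (misery_per_birth - opportunity_cost_per_person)

# If misery barely exceeds the opportunity cost, preventing births
# yields only a small gain per birth prevented.
print(birth_disutility(1000, 3.0, 2.5))  # -> 500.0
```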

I've always found it funny how modern society is basically formally libertarian about sex and not nearly anything else. And how deontological Libertarians basically treat everything with the same ethical heuristics modern society uses for sex.

"Anything between consenting adults" and "The state has no business in my bedroom" don't seem like things that would only make sense for sex and the bedroom and practically nowhere else. This observation moved me towards thinking they make less sense for sex and the bedroom, and more sense for other things, than my society thought.

Now obviously our society isn't really libertarian about sexuality. We seem to regulate to death with social and legal norms nearly every aspect of interhuman interaction that is related to sex but isn't sex. This contributes to the desirability of a bare bones approach to sex logistics, the one night stand, if one is doing cost benefit analysis.

Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.

I like the name "Confidence Game"- reminds people of a con game while informing you as to the point of the game.

See if you can focus on a positive-point scale. Try to make it so that winning nets you a lot of points but "losing" only costs a couple. (Same effect on scores either way.) This won't seem as odd if you set it up as one long scale rather than two shorter ones: so 99-90-80-60-50-60-80-90-99.

Setting it to a timer will make it ADDICTIVE. Set it up in quick rounds. Make it like a quiz show. No question limit, or a bonus if you hit the limit for being "Quick on your feet." Make it hard but not impossible to do.

Set up a leaderboard where you can post to FB, show friends, and possibly compare your score to virtual "opponents" (which are really just scoring metrics). Possibly make those metrics con-man themed, in keeping with the game's name.

Graphics will help a lot. Consider running with the con-game theme.

Label people: maybe something like "Underconfident" "Unsure" "Confident" "AMAZING" "Confident" "Overconfident" "Cocksure" (Test labels to see what works well!) rather than using graphs. Graphs and percentages? Turn-off. Drop the % sign and just show two numbers with a label. Make this separate from points but related. (High points=greater chance of falling toward the center, but in theory not necessarily the same.) Yes, I know the point is to get people to think in percentages, but if you want to do that you have to get them there without actually showing them math, which many find off-putting.

Set up a coin system that earns you benefits for putting into the game: extended round, "confidence streak" bonuses, hints, or skips might be good rewards here. Test and see what works. Allow people to pay for coins, but also reward coins for play or another mini-game related to play or both. (Investment=more play)

Is there any catastrophic risk that a Mars colony mitigates that isn't also mitigated by a self-sufficient, self-powered (e.g. geothermal) deep underground colony with enforced long quarantine periods?

Good point. Mars would only be better off if the colonies over-engineered their radiation protection. Otherwise anything that gets through Earth's natural protection would probably get through Martian settlements designed to give the same level of protection. It might be relatively cheap to over-engineer (e.g. digging in an extra meter), but it might not.

The difference between 60% credence and 80% credence seems much smaller to me than the difference between 90% and 99%. Is there a reason there's no option between 90% and 99%? In your testing, have you found any well-calibrated users who answer 99% a non-trivial fraction of the time?
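One way to make the 90-vs-99 asymmetry concrete: under a logarithmic scoring rule, which is a common choice in calibration games (I'm assuming here that the Credence Game uses something similar; its actual scoring may differ), the penalty for a wrong answer grows steeply with stated credence, so 99% is far riskier than 90%.

```python
import math

# Sketch of a logarithmic scoring rule: score = log2 of the probability
# you assigned to what actually happened. 0 is a confident correct answer;
# more negative is worse.
def log_score(credence, correct):
    p = credence if correct else 1.0 - credence
    return math.log2(p)

# Penalty when wrong, at each credence level offered by the game:
for c in (0.6, 0.8, 0.9, 0.99):
    print(c, round(log_score(c, False), 2))
# 0.6 -> -1.32, 0.8 -> -2.32, 0.9 -> -3.32, 0.99 -> -6.64
```

Note the jump: a wrong 99% answer costs twice as much as a wrong 90% one, while the gap between 60% and 80% is comparatively mild.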

Upvoted, and you're right, of course. In fact, I created much of this list by looking at Goodreads, and the lw textbook thread, and Eliezer's bookshelf, and the SI reading list, etc., and cherry-picking what I was interested in. I was more soliciting commentary on those specific books than just general recommendations per se.

I assumed the odds were that most of Lesswrong wouldn't have read most of them, or just wouldn't want to bother, which would be understandable. Honestly, I wasn't expecting too much from posting this. I figured if I could improve on or drop one book from that list it would be worth it.

Edited to add: I looked at your favorites list and will probably make a few additions.

I would recommend against The Holographic Universe. A relative read it, and apparently it talks a lot about very woo-ish subjects. Whenever I've disputed its claims, I've found it to be very poorly sourced.

Rather than considering it in terms of fatality rate, consider it in terms of curtailing humanity's possible expansion into the universe. The Industrial Revolution was possible because of abundant coal, and the 20th century's expansion of technology was possible because of petroleum. The easy-access coal and oil are used up; the resources being used today would not be accessible to a preindustrial or newly industrial civilization. So if our civilization falls and humanity reverts to preindustrial conditions, it stays there.

An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity. An HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject each opening).

Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
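The NC != P intuition can be made concrete with the canonical P-complete problem, the Circuit Value Problem: given a Boolean circuit and its inputs, compute the output. It is trivial to evaluate sequentially, but no good general parallelization is known. A minimal sketch (the gate encoding is my own, hypothetical one):

```python
# Circuit Value Problem: gates listed in topological order.
# Each gate is ("IN", bit) or (op, i, j) referencing earlier gates.
# Evaluating sequentially is easy (linear time); parallelizing it
# well is conjectured impossible in general (NC != P).
def eval_circuit(gates):
    vals = []
    for g in gates:
        if g[0] == "IN":
            vals.append(g[1])
        elif g[0] == "AND":
            vals.append(vals[g[1]] & vals[g[2]])
        elif g[0] == "OR":
            vals.append(vals[g[1]] | vals[g[2]])
        elif g[0] == "NOT":
            vals.append(1 - vals[g[1]])
    return vals[-1]  # output of the last gate

circuit = [("IN", 1), ("IN", 0), ("AND", 0, 1), ("NOT", 2), ("OR", 2, 3)]
print(eval_circuit(circuit))  # -> 1
```

The sequential dependency is visible in the code: each gate may read any earlier gate's value, so deep circuits force long chains of dependent steps.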

I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel's report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days/weeks? Is it large enough to pay for a year of premium HTC computing power?

Good questions. I don't know the answers. But like you say, UDT especially is basically defined circularly - where the agent's decision is a function of itself. Making this coherent is still an unsolved problem. So I was wondering if we could get around some of the paradoxes by giving up on certainty.

That's an explicit assumption of the hypothetical - "The technology will not progress in refinement without practice, and practice requires actually restoring cryogenically frozen human brains." Suppose that the process requires a lot of recalibration between species, and tends to fail more for brains with more convolutions and synaptic density.

I can see a global computer catastrophe rising to the level of civilization-ending, and 90-99% fatality rate, if I squint hard enough. I could see the fatality rate being even higher if it happens farther in the future. I'm having trouble seeing it as an existential risk, that literally kills enough people that there is no viable population remaining anywhere. Even in the case of computer catastrophe as malicious event, I'm having trouble envisioning an existential risk that doesn't also include one of the other options.

Are there papers that make the case for computer catastrophe as X-risk?

The rule book is there to resolve conflict, mainly in terms of combat. If you're familiar with the kids' game of cops and robbers, it's to make sure there are no arguments of "Bang! I shot you!" "No, I shot you first!". The majority of the mechanics are of this nature, and the rest of the book is less rules than a description of a fantasy world for players to build off of and improvise within.

In general it's fairly boisterous, and the communal nature of the game means there aren't a lot of gaps. You can do your thinking during the times other players are talking about their decisions or when the monsters are acting or when the DM is explaining, so if you're playing with people who are experienced there aren't a lot of long pauses. Watching from the sidelines is pretty unexciting because most people, while they put some effort into acting, aren't that great, so if you lack the emotional connection with the characters and situations and achievements it's just not that good.

Re: Shy outcasts. A lot of shy outcasts really enjoy the opportunity to act like NOT shy outcasts. DND is normally played in a safer environment where social experimentation is not just encouraged but pretty much required. Pretty much no one CHOOSES to be a shy outcast so much as they're forced to inhabit that corner of existence by everyone else. Being the center of attention of a bunch of people who you respect and who respect you is a lot more pleasant than being the center of attention of people who are primed to mock and belittle you.

Even the so-called "embarrassingly parallel" problems, those whose theoretical performance scales almost linearly with the number of CPUs, in practice scale sublinearly in the amount of work done per dollar: massive parallelization comes with all kinds of overheads, from synchronization to cache contention to network communication costs to distributed storage issues. More trivially, large data centers have significant heat dissipation issues: they all need active cooling, and many are also housed in high-tech buildings specifically designed to address this issue. Many companies even place data centers in northern countries to take advantage of the colder climate, instead of putting them in, say, China, India or Brazil, where labor costs much less.

Problems that are not embarrassingly parallel are limited by Amdahl's law: as you increase the number of CPUs, the performance quickly reaches an asymptote where the sequential parts of the algorithm dominate.
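Amdahl's law can be sketched in a few lines: with parallelizable fraction p and n CPUs, speedup = 1 / ((1 - p) + p/n), which asymptotes at 1 / (1 - p) no matter how many CPUs you add.

```python
# Amdahl's law: overall speedup from parallelizing fraction p of a
# workload across n CPUs. The serial fraction (1 - p) caps the gain.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.95, 8), 2))    # -> 5.93 on 8 CPUs
print(round(amdahl_speedup(0.95, 1e9), 2))  # -> 20.0, the 1/(1-p) ceiling
```

Even with 95% of the work parallelizable, a billion CPUs can never do better than 20x: exactly the asymptote the comment describes.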

I can't help but think that there being no obvious candidates means the candidates wouldn't be fantastically useful.


I would say that if you don't want to be thought of as the sort of person who propagates odious bullshit, the very first thing to do would be not to propagate odious bullshit, not to complain that the person who called you out on propagating odious bullshit didn't touch third base. But perhaps that's just me.

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not?

If you look above, you'll note that the statement you've quoted was in response to your claim that "people want is a living conscious artificial mind" and my sentence after the one you are quoting is also about AI. So if it helps, replace "it" with "functional general AI" and reread the above. (Although frankly, I'm confused by how you interpreted the question given that the rest of your paragraph deals with AI.)

But I think it is actually worth touching on your question: Do people care if they are philosophical zombies? I suspect that by and large the answer is "no". While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn't something that's widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn't impact how they think they should be treated.

The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program, resources that could have better been spent in more productive ways.

If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I'm not sure what you mean by "strong AI proponents" in this context. Very few people actively work towards research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That's how for example we now have practical systems with neural nets that are quite helpful.

Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?

So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn't involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially self-modified, albeit at a slow rate, over time.

"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."

Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the '50s and early '60s, and its main arguments for and against it as an explanation of consciousness, are given.

I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness, and not just an argument that it doesn't need an explanation?

Premise 1: "If it is raining, Mr. Smith will use his umbrella." Premise 2: "It is raining." Conclusion: "Therefore Mr. Smith will use his umbrella."
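In propositional form, the inference pattern here is just modus ponens:

```latex
% P: it is raining; Q: Mr. Smith uses his umbrella
\[
  \frac{P \rightarrow Q \qquad P}{\therefore\ Q}
\]
```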

That is a behaviorist explanation for consciousness. It is logically valid but still fails because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior then behavior cannot constitute intentionality.

Ok. I think I'm beginning to see the problem to some extent, and I wonder how much this is due to trying to talk about behaviorism in a non-behaviorist framework. The behaviorist isn't making any claim about "intent" at all. Behaviorism just tries to talk about behavior. Similarly, "decides" isn't a statement that goes into their model. Moreover, the fact that some days Smith does one thing in response to rain and sometimes does other things isn't a criticism of behaviorism: in order to argue that it is, one needs to claim that some sort of free-willed decision is going on, rather than subtle differences in the day or recent experiences. The objection then isn't to behaviorism, but rather to asserting a strong notion of free will.

I thought you would get the reference to Ned Block's counterargument to behaviorism. It shows how an unconscious machine could pass the Turing test.

It may help to be aware of the illusion of transparency. Oblique references are one of the easiest things to miscommunicate about. But yes, I'm familiar with Block's look-up table argument. It isn't clear how it is relevant here: yes, the argument raises issues with many purely descriptive notions of consciousness, especially functionalism. But it isn't an argument that consciousness needs to involve free will and qualia and who knows what else. If anything, it is a decent argument that the whole notion of consciousness is fatally confused.

Is "Blockhead" (the name affectionately given to this robot) conscious?

No it is not.

So everything here is essentially just smuggling in the conclusion you want, in other words. It might help if you gave a definition of consciousness.

I'm pretty sure that Steven Moffat must have been aware of it and created the Teselecta.

Massive illusion of transparency here: you're presuming that Moffat is thinking about the same things that you are. The idea of miniature people running a person has been around for a long time. Prior examples include a series of Sunday strips of Calvin and Hobbes, as well as a truly awful Eddie Murphy movie.

Someone who is familiar with the relevant cognitive science is encouraged to correct me if it turns out that my current contrarian opinion is merely the result of my ignorance, but---I'm inclined to just call that a cognitive disability. To be sure, if you happen to be so lucky as to have a domain expert nearby who is willing to spend time with you to clear up your misconceptions, then that's a wonderful resource and you should take advantage of it. But human labor is expensive and text is cheap; people who understand something deeply enough to teach it well have better things to do with their lives than give the same lecture dozens of times. What happens when you want to know something that no one is willing to teach you (at an affordable price)? To be so incompetent at reading as to actually be dependent on a flesh-and-blood human to talk you through every little step every time you want to understand something complicated is a crippling disability, much much worse than not being able to walk. I weep for those who are cursed to live with such a hellishly debilitating condition, and look forward to some future day when our civilization's medical technology has advanced enough to cure this awful disease.

As far as I see around, there are people with various optimal bite sizes.

For something I do want to consume in its entirety, I prefer long-form writing; there are people who prefer smaller-sized pieces, or smaller-sized pieces with a rare chance to interrupt and ask a question.

I learn better from text; there are people who understand spoken words better. Spoken words have intonations and emotional connotations (and often there are relevant gestures at the same time); text reading speed can be changed without any loss.

So, I wouldn't discount the option that another form of presentation could be hypothetically interesting to some 10% of the population. It would be just one separate thing for them to consider, of course.

Supernova, GRB: probably? Unlike impactors, a supernova or GRB would affect both Earth and Mars. However, if the major impact on Earth is deaths by radiation of exposed people and destruction of agriculture via destruction of the ozone layer, then Mars should be much more resilient, since settlements have to be more radiation-hardened anyway, and the agriculture would be under glass or underground.

This is not a good addition. The Mars-hardened facilities will be hardened only for Mars conditions (unless it's extremely easy to harden against any level of radiation?) in order to cut colonization costs from 'mindbogglingly expensive and equivalent to decades of world GDP' to something more reasonable like 'a decade of world GDP'. So given a supernova, they will have to upgrade their facilities anyway, and they are worse positioned than anyone on Earth: no ozone layer, no atmosphere in general, a small resource & industrial base, etc. Any defense against a supernova on Mars could be better done on Earth.

My impression, from idly watching sometimes at a science fiction club, is that it's fairly boisterous, and few watch from the sidelines (certainly I didn't understand what was happening, although if I had known the rules maybe I would've had a better chance).

Question 2: Ask people who knew me? Infer a model of my mind from that and my writings? I don't consider it more ethical to use uncertainty as a reason to postpone it until some unforeseeable technology is developed.

Question 3: I'm reluctant to enter such a lottery because I don't trust someone who believes those assumptions. I expect the scanning part of the process to improve (without depending on human trials) to the point where enough information is preserved to make a >0.99 fidelity upload theoretically possible. I would accept a trial which took that information and experimented with a simulation of 0.5 fidelity in an attempt to improve the simulation software, assuming the raw information would later be used to produce a better upload.

Something very weird happened to me today after reading this paragraph in the article yesterday:

Another particularly well-documented case of the persistence of mistaken beliefs despite extensive corrective efforts involves the decades-long deceptive advertising for Listerine mouthwash in the U.S. Advertisements for Listerine had falsely claimed for more than 50 years that the product helped prevent or reduce the severity of colds and sore throats.

I had not known before that Listerine had claimed to alleviate colds and sore throats. This morning, as I was using my Listerine mouthwash, I felt as though the Listerine was helping my sore throat. Not deliberatively, of course, but instantaneously. And my mind also instantaneously constructed a picture where the mouthwash was killing germs in my throat. This happened after I learned about the claim from a source whose only reason for mentioning it was that it was false. From a source about the dangers of misinformation.

Edit: What's in the rule book? If you forget the rule book at home, can you get along or do you have to go back for it?

If there are no books at the table, it depends on whether your fellow players are willing to trust you to remember the rules neutrally, and whether the DM is willing to adjudicate where no one can remember. There's also the online version of most of the core rules, although not all the exotic extra classes and such.

If a Mars colony mitigates catastrophic risk (existential / extinction risk?) from climate change, then climate change is not an existential risk to human civilization on Earth.

This does not follow. One possible (although very unlikely) result of climate change is a much more severe situation resulting in a Venus-like state (although not with as high a temperature, and not as much nasty stuff in the atmosphere). If that happens, Mars will be much easier to survive on than Earth, since with a lot of energy from nuclear power, extremely cold environments are much more hospitable than extremely hot environments. Current models make such a strong runaway result unlikely, but it is a possibility.

Okay, so would you kindly point to some awful, worthless posts/comments by those awful, worthless people? And explain what makes them so awful and worthless? So that the right-thinking users can learn to avoid them?

Or, if you don't have anything specific in mind, would you at least cease insulting the community?

It depends on the technology and the actual risks, but yes, it makes more sense to start with the best preserved: after everything has been cleared up you will have fewer completely messed-up people, and the technology will most probably improve faster if at the beginning you use it on better-preserved people, because there are fewer factors to worry about.

The Mars colony could be useful to test the tools necessary to overcome the hostile climate, and it could make their development (possibly mass development) a higher priority.

So in case Earth's climate starts to change very rapidly, we would have the choice of using already developed and tested equipment, built in existing factories, instead of trying to invent it amidst global chaos.

OK, makes sense. If we assume the AI to be perfectly rational, it would probably give exterminating humanity on Earth high priority, exactly because there is a chance of them building another AI.

However, to wipe humanity from the Earth, the AI does not have to be very smart. One virus, well designed and well distributed, could do the job. An AI with some bugs could still be capable of managing that... and then fail to properly arrange the space attack, or destroy itself through a wrong self-modification.

A lot of the time you waste doing those things was already wasted. For instance, I am posting this while waiting in a drive-through for breakfast, but this is exactly when I would be playing a random game if I were not posting here.

Edit: And well-designed smartphone games (which is not all of them) load shockingly fast. I have actually played smartphone games while waiting for other, slower games to load on my computer.

If a Mars colony mitigates catastrophic risk (extinction risk?) from climate change,
then climate change is not an existential risk to human civilization on earth.

If humans can thrive on Mars, Earth based humanity will be able to cope with any climate change less drastic than transforming the climate of Earth to something as hostile as the current climate of Mars.

Preliminary note: While my assertion concerning uFAI x-risk reduction is certainly fair game for debate, it is ancillary to my main interest in this topic, which is overall x-risk reduction from all sources. That being said, I do think that the uFAI specific x-risk reduction is non-negligible, though I do agree it may well be minor.

Why should an uFAI which ignores space travel be more likely than an uFAI which ignores people dressed in green?

Two broad categories of explaining such a difference:

The advent of uFAI may lead to a series of events (e.g. nuclear winter) that 1) preclude the uFAI from pursuing space travel for the time being, 2) lead to the mutual demise of both humankind and the uFAI, 3) lead to a situation in which the cost/benefit analysis on the uFAI's part does not come out in favor of wiping out a Mars colony, or 4) leave the Mars colony enough time to implement countermeasures of some sort, up to and including creating a friendly AI to protect them.

The utility function (which may well be a somewhat random one implemented by a researcher unwittingly creating the first AGI) could well yield strange results, especially if it is not change-invariant. For example, it may have an emphasis on building tools unable to achieve space flight (maybe the uFAI was originally supposed only to build as many cars as possible, favoring certain tools), be concerned only with the planet Earth ("Save the planet"-type AI), or, as mentioned, be incapable of pursuing long-term plans due to geometric discounting of future rewards, there always being something to optimize which takes only short-term planning (i.e. it is locked into a greedy pattern).

All of which is of course speculative, but a uFAI taking to the stars has "more"* scenarios going against it than a uFAI ignoring people dressed in green. (*In terms of composite probability; the number of scenarios is countably infinite for both.)