(tl;dr: In this post I try to explain why I think the stopping rule of an experiment matters. It is likely that someone will find a flaw in my reasoning. That would be a great outcome as it would help me change my mind. Heads up: If you read this looking for new insight you may be disappointed to only find my confusion)

A few weeks ago I started reading beautiful probability and immediately thought that Eliezer is wrong about the stopping rule mattering to inference. I dropped everything and spent the next three hours convincing myself that the stopping rule doesn't matter and I agree with Jaynes and Eliezer. As luck would have it, soon after that the stopping rule question was the topic of discussion at our local LW meetup. A couple people agreed with me and a couple didn't and tried to prove it with math, but most of the room seemed to hold a third opinion: they disagreed but didn't care to find out. I found that position quite mind-boggling. Ostensibly, most people are in that room because we read the sequences and thought that this EWOR (Eliezer's Way Of Rationality) thing is pretty cool. EWOR is an epistemology based on the mathematical rules of probability, and the dude who came up with it apparently does mathematics for a living trying to save the world. It doesn't seem like a stretch to think that if you disagree with Eliezer on a question of probability math, a question that he considers so obvious it requires no explanation, that's a big frickin' deal!

First, I'd like to point out that the mainstream academic term for Eliezer's claim is The Strong Likelihood Principle; a vigorous discussion of stopping rules ensued in the comments section of that post.

My own intuition is that the strong likelihood principle is wrong. Moreover, there exist a small number of people whose opinion I give higher level of credence than Eliezer's, and some of those people also disagree with him. For instance, I've been present in the room when a distinguished Professor of Biostatistics at Harvard stated matter-of-factly that the principle is trivially wrong. I also observed that he was not challenged on this by another full Professor of Biostatistics who is considered an expert on Bayesian inference.

So at best, the fact that Eliezer supports the strong likelihood principle is a single data point, i.e. pretty weak Bayesian evidence. I do however value Eliezer's opinion, and in this case I recognize that I am confused. Being a good rationalist, I'm going to take that as an indication that it is time for The Ritual. Writing this post is part of my "ritual": It is an attempt to clarify exactly why I think the stopping condition matters, and determine whether those reasons are valid. I expect a likely outcome is that someone will identify a flaw in my reasoning. This will be very useful and help improve my map-territory correspondence.

--

Suppose there are two coins in existence, both of which are biased: Coin A comes up heads with probability 2/3 and tails with probability 1/3, whereas Coin B comes up heads with probability 1/3. Someone gives me a coin without telling me which one; my goal is to figure out whether it is Coin A or Coin B. My prior is that they are equally likely.

There are two statisticians who both offer to do an experiment: Statistician 1 says that he will flip the coin 20 times and report the number of heads. Statistician 2 would really like me to believe that it is Coin A, and says he will terminate the experiment whenever there are more heads than tails. However, since Statistician 2 is kind of lazy and doesn't have infinite time, he also says that if he reaches 20 flips he is going to call it quits and give up.

Both statisticians do the experiment, and both experiments end up with 12 heads and 8 tails after 20 flips. I trust both statisticians to be honest about the experimental design and the stopping rules.

In the experiment of Statistician 1, the probability of getting this outcome with Coin A is 0.1480, whereas the probability with Coin B is 0.0092. The likelihood ratio is therefore exactly 16 (the binomial coefficients cancel, leaving 2^(12-8)), and the posterior probability of Coin A (after converting the prior to odds, applying the likelihood ratio and converting back to probability) is 16/17 ≈ 0.94.
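These numbers are easy to check. A minimal sketch of the computation (note the binomial coefficients cancel in the ratio, so the likelihood ratio is exactly 2^(12-8) = 16):

```python
from math import comb

n, heads, tails = 20, 12, 8
p_A, p_B = 2/3, 1/3          # P(heads) for Coin A and Coin B

lik_A = comb(n, heads) * p_A**heads * (1 - p_A)**tails   # ≈ 0.1480
lik_B = comb(n, heads) * p_B**heads * (1 - p_B)**tails   # ≈ 0.0092
ratio = lik_A / lik_B                                    # 2**(heads - tails) = 16

prior_odds = 1.0                                         # coins equally likely a priori
posterior_odds = prior_odds * ratio
posterior_A = posterior_odds / (1 + posterior_odds)      # 16/17 ≈ 0.94
```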

In the experiment of Statistician 2, however, I can't just use the binomial distribution, because there is an additional data point which is not Bernoulli, namely the number of coin flips. I therefore have to calculate, for both Coin A and Coin B, the probability that he would not terminate the experiment prior to the 20th flip, and that at that stage he would have 12 heads and 8 tails. Since the probability of reaching 20 flips is much higher for Coin A than for Coin B, the likelihood ratio would be much higher than in the experiment of Statistician 1.
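The calculation described above can be sketched as a dynamic program: propagate the probability of every still-running state flip by flip, diverting probability mass to a terminal outcome whenever the stopping rule fires or the 20-flip cap is reached. The particular stopping rule below is a hypothetical placeholder of my own, just to show the mechanics:

```python
def outcome_probs(p_heads, n_max, stop):
    """P(experiment ends at n flips with h heads) under an early-stopping
    rule stop(heads, tails) plus a hard cap of n_max flips."""
    alive = {(0, 0): 1.0}   # still-running states: (heads, tails) -> probability
    out = {}
    for _ in range(n_max):
        nxt = {}
        for (h, t), q in alive.items():
            for is_head in (True, False):
                h2 = h + is_head
                t2 = t + (not is_head)
                pr = q * (p_heads if is_head else 1 - p_heads)
                if stop(h2, t2) or h2 + t2 == n_max:
                    key = (h2 + t2, h2)              # terminal outcome
                    out[key] = out.get(key, 0.0) + pr
                else:
                    nxt[(h2, t2)] = nxt.get((h2, t2), 0.0) + pr
        alive = nxt
    return out

# Hypothetical rule (mine, for illustration): stop once heads lead tails by 4.
rule = lambda h, t: h - t >= 4
probs_A = outcome_probs(2/3, 20, rule)
probs_B = outcome_probs(1/3, 20, rule)
```

The likelihood ratio for any terminal outcome o is then probs_A[o] / probs_B[o].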

This should not be unexpected: If Statistician 2 gives me data that supports the hypothesis which his stopping rule was designed to discredit, then that is stronger evidence than similar data coming from the neutral Statistician 1.

In other words, the stopping rule matters. Yes, all the evidence in the trial is still in the likelihood ratio, but the likelihood ratio is different because there is an additional data point. Not considering this additional data point is statistical malpractice.

Hello! I'm running an Ideological Turing Test for my local rationality group, and I'm wondering what ideology to use (and what prompts to use for that ideology). Palladias has previously run a number of tests on Christianity, but ideally I'd find something that was a good 50/50 split for my community, and I don't expect to find many Christians in my local group. The original test was proposed for politics, which seems like a reasonable first guess, but I also worry that my group has too many liberals and not enough conservatives to make that work well.

What I plan to do is email the participants who have agreed to write entries asking how they stand on a number of issues (politics, religion, etc) and then use the issue that is most divisive within the population. To do that, however, I'll need a number of possible issues. Do any of you have good ideas for ITT domains other than religion or politics, particularly for rationalists?

(Side questions:

I've been leaning towards using the name "Caplan Test" instead of "Ideological Turing Test". I think the current name is too unwieldy and gives the wrong impression. Does the ITT name seem worth keeping?

Also, would anyone on here be interested in submitting entries to my test and/or seeing results?)

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.

If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.

Please post only under one of the already created subthreads, and never directly under the parent media thread.

Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.

Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Reading up on the GiveWell Open Philanthropy Project's investigation of science policy led me to look up CRISPR, which is given as the example of a very high potential basic science research area.

In context, GiveWell appears to be interested in the potential of Gene drive. I am not sure if I am using the term in a grammatically correct way.

Austin Burt, an evolutionary geneticist at Imperial College London,[5] first outlined the possibility of building gene drives based on natural "selfish" homing endonuclease genes.[4] Researchers had already shown that these "selfish" genes could spread rapidly through successive generations. Burt suggested that gene drives might be used to prevent a mosquito population from transmitting the malaria parasite or crash a mosquito population. Gene drives based on homing endonucleases have been demonstrated in the laboratory in transgenic populations of mosquitoes[6] and fruit flies.[7][8] These enzymes could be used to drive alterations through wild populations.[1]

I would be surprised if I am the first community member to ponder whether we could just go ahead and exterminate mosquitoes to control their populations. Google research I conducted ages ago indicated that doing so resulted in no effective improvement in desired outcomes over the long term. I vaguely remember several examples cited, none of which involved Gene Driving, which I have only just heard of. I concluded, at the time, that controlling mosquito populations wasn't the way to go, and that instead people should proactively protect themselves.

In 2015, a study in Panama reported that genetically modified (GM) mosquitoes were effective in reducing populations of dengue fever-carrying Aedes aegypti. Over a six-month period approximately 4.2 million males were released, yielding a 93-percent population reduction. The female is the disease carrier. The population declined because the larvae of GM males and wild females fail to thrive. Two control areas did not experience population declines. The A. aegypti were not replaced by other species such as the aggressive A. albopictus. In 2014, nine people died and 5,026 were infected, and in 2013 eight deaths and 4,481 infected, while in March 2015 a baby became the year's first victim of the disease.[9]

It's apparent that evidence for the efficacy of Gene Driving is emerging. In conducting research for this discussion post, I found that most webpages in the top Google results were from groups and individuals concerned about genetically modified mosquitoes being released. I am interested in knowing if that's the case for anyone else, since my results may be biased by Google targeting results based on my past proclivity for using Google searches to confirm suspicions I already had.

It appears that the company responsible for the mosquitoes is called Oxitec. I have no conflict of interest to disclose in relation to them (though I was hoping to find one, but they're not a publicly listed company!). They appear to be supplying trials in the US and Australia, though I haven't looked to see if they're involved in any trials in developing countries. It stuns me that I was not aware of them, given multiple lines of interest that could have brought me to them.

My general disposition towards synthetic biology has been overwhelmingly suspicious and censorious in the recent past. My views were influenced by the caution I've ported from fears of unfriendly AI. I wanted to share this story of Gene Driving because it is heartwarming and has made me feel better about the future of both existential risk and effective giving.

A 2006 study showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year” so you would think that wearing a helmet while driving would be commonplace. Race car drivers wear helmets. But since almost no one wears a helmet while driving a regular car, you probably fear that if you wore one you would look silly, attract the notice of the police for driving while weird, or the attention of another driver who took your safety attire as a challenge. (Car drivers are more likely to hit bicyclists who wear helmets.)

The $30+shipping Crasche hat is designed for people who should wear a helmet but don’t. It looks like a ski cap, but contains concealed lightweight protective material. People who have signed up for cryonics, such as myself, would get an especially high expected benefit from using a driving helmet because we very much want our brains to “survive” even a “fatal” crash. I have been using a Crasche hat for about a week.

There are some long lists of false beliefs that programmers hold. This isn't because programmers are especially likely to be more wrong than anyone else, it's just that programming offers a better opportunity than most people get to find out how incomplete their model of the world is.

I'm posting about this here, not just because this information has a decent chance of being both entertaining and useful, but because LWers try to figure things out from relatively simple principles-- who knows what simplifying assumptions might be tripping us up?

The classic (and I think the first) was about names. There have been a few more lists created since then.

Following on from a few threads about superpowers and extra senses that humans can try to get, I have always been interested in the idea of putting a magnet in my finger for the benefits of extra-sensory perception.

Stories (and occasional news articles) imply that having a magnet implanted in a finger, in a place surrounded by nerves, imparts a power of electric-sensation: the ability to feel when there are electric fields around. So that's pretty neat. Only I don't really like the idea of cutting into myself (even if it's done by a professional piercing artist).

Only recently did I come across the suggestion that a magnetic ring could impart similar abilities and properties. I was delighted at the idea of a similar and non-invasive version of the magnetic-implant (people with magnetic implants are commonly known as grinders within the community). I was so keen on trying it that I went out and purchased a few magnetic rings of different styles and different properties.

Interestingly, a ring-shaped object can be magnetised in one of two general directions: across the diameter, or along the height of the cylinder. (There is a third type, a ring consisting of four outwardly magnetised quarter-arcs of magnetic metal suspended in a ring casing, and a few orientations of that system.)

I have now been wearing a Neodymium ND50 magnetic ring from supermagnetman.com for around two months. The following is a description of my experiences with it.

When I first got the rings, I tried wearing more than one ring on each hand, and very quickly found out what happens when you wear two magnets close to each other: they attract. Within a day I was wearing one magnet on each hand. What is interesting is what happens when you move two very strong magnets within each other's magnetic field. You get the ability to feel a magnetic field and roll it around in your hands. I found myself taking typing breaks to play with the magnetic field between my fingers. It was an interesting experience to be able to do that. I also found I liked the snap as the two magnets pulled towards each other, and regularly played with them by moving them near each other. Based on my experience, I would encourage others to use magnets as a socially acceptable way to hide an ADHD twitch, or just a way to keep yourself amused if you don't have a phone to pull out and ever need a reason to move. I have previously used elastic bands around my wrist for a similar purpose.

The next thing that is interesting to note is what is or is not ferrous. Fridges are made of ferrous metal, but not on the inside. Door handles are not usually ferrous, but the tongue and groove of the latch is. Metal railings are common, as are metal nails in wood. Elevators and escalators have some metallic parts. Light switches are often plastic, but there is a metal screw holding them into the wall. Tennis-court fencing is ferrous. The ends of USB cables are sometimes ferrous and sometimes not; the cables themselves are not ferrous (they are probably made of copper), except for one I found.

Breaking technology

I had a concern that I would break my technology. That would be bad. Overall I found zero broken pieces of technology. In theory, if you take a speaker, which consists of a magnet and an electric coil, and you mess around with its magnetic field, it will be unhappy and maybe break. That has not happened yet. The same can be said for hard drives, magnetic memory devices, phones and other things that rely on electricity. So far nothing has broken. What I did notice is that my phone has a magnetic-sleep function at the top left, i.e. it turns the screen off when I hold the ring near that point, which is either a benefit or a detriment depending on where I am wearing the ring.

Metal shards

I spend some of my time in workshops that have metal shards lying around. Sometimes they are sharp, sometimes they are more like dust. They end up coating the magnetic ring. The sharp ones end up jabbing you, and the dust just looks like dirt on your skin. Within a few hours they tend to go away anyway, but it is something I have noticed.

Magnetic strength

Over the time I have been wearing the magnets, their strength has dropped off significantly. I am considering building a remagnetisation jig, but have not started any work on it. Obviously, every time I ding them against something or drop them, the magnetisation decreases a bit as the magnetic dipoles reorganise.

Knives

I cook a lot, which means I find myself holding sharp knives fairly often. The most dangerous thing I noticed about these rings is that when I hold a ferrous knife in the normal way, the magnet has a tendency to shift the knife slightly, at times when I don't want it to. That sucks. Don't wear them while handling sharp objects like knives; the last thing you want is for your carrot-cutting to accidentally turn into a finger-cutting event. What is also interesting is that some cutlery is made of ferrous metal and some is not, and sometimes parts of a single piece of cutlery are ferrous and parts are non-ferrous. For example, my normal knife set has a ferrous blade and a non-ferrous handle. I always figured they were the same, but the magnet says they are different materials, which is pretty neat. I have found the same thing with spoons: sometimes the scoop is ferrous and the handle is not. I assume this is because the scoop and blade parts need extra forming steps, so need to be made of a more workable metal. Cheaper cutlery is not like this.

The same applies to hot pieces of metal: ovens, stoves, kettles, soldering irons. When they accidentally move towards your fingers, or your fingers are pulled towards them, that's a slightly unsafe experience.

Electric-sense

You know how when you run a microwave it buzzes, in a *vibrating* sort of way? If you put your hand against the outside of a microwave you will feel the motor going. Yeah, cool. So having a magnetic ring means you can feel that without touching the microwave, from about 20cm away. There is some variability to it; better microwaves have more shielding on their motors and leak less. I tried to feel the electric field around power tools like a drill press, handheld tools like an orbital sander, computers, cars and appliances, which pretty much covers everything. I also tried servers, and the only thing that really had a buzzing field was a UPS (uninterruptible power supply), which was cool. However, other people have reported that any transformer, e.g. a computer charger, makes that buzz. I also carry a battery block with me, and that had no interesting fields. Totally not exciting. As for moving electrical charge: can't feel it. Whether power points are receiving power: nope. Not dying by electrocution: no change.

Boring superpower

There is a reason I call magnetic rings a boring superpower. The only real superpower I have gained is the power to pick up my keys without using my fingers, and maybe to hold my keys without trying to. As superpowers go, that's pretty lame. But kinda nifty. I don't know. I wouldn't insist people do it for life-changing purposes.

Welcome to the Rationality reading group. This fortnight we discuss Part F: Politics and Rationality (pp. 255-289). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

F. Politics and Rationality

57. Politics is the Mind-Killer - People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

58. Policy Debates Should Not Appear One-Sided - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.

59. The Scales of Justice, the Notebook of Rationality - People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

60. Correspondence Bias - Also known as the fundamental attribution error, this refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

61. Are Your Enemies Innately Evil? - People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.

62. Reversed Stupidity Is Not Intelligence - The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.

63. Argument Screens Off Authority - There are many cases in which we should take the authority of experts into account, when we decide whether or not to believe their claims. But, if there are technical arguments that are available, these can screen off the authority of experts.

64. Hug the Query - The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

65. Rationality and the English Language - George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.

66. Human Evil and Muddled Thinking - It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.

This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part G: Against Rationalization (pp. 293-339). The discussion will go live on Wednesday, 12 August 2015, right here on the discussion forum of LessWrong.

My name is Andrés Gómez Emilsson, and I'm the former president of the Stanford Transhumanist Association. I just graduated from Stanford with a master's in computational psychology (my undergraduate degree was in Symbolic Systems, the major with the highest LessWronger density at Stanford and possibly of all universities).

I have a request for the LessWrong community: I would like as many of you as possible to fill out this questionnaire I created to help us understand what causes the diversity of values in transhumanism. The purpose of this questionnaire is twofold:

Characterize the state-space of background assumptions about consciousness

Evaluate the influence of beliefs about consciousness, as well as personality and activities, in the acquisition of memetic affiliations

The first part is not specific to transhumanism, and it will be useful whether or not the second is fruitful. What do I mean by the state-space of background assumptions? The best way to get a sense of what this would look like is to see the results of a previous study I conducted: State-space of drug effects. There I asked participants to "rate the effects of a drug they have taken" by selecting the degree to which certain phrases describe the effects of the drug. I then conducted factor analysis on the dataset and extracted 6 meaningful factors accounting for more than 50% of the variance. Finally, I mapped the centroid of the responses of each drug in the state-space defined, so that people could visually compare the relative position of all of the substances in a normalized 6-dimensional space.

I don't know what the state-space of background assumptions about consciousness looks like, but hopefully the analysis of the responses to this survey will reveal them.

The second part is specific to transhumanism, and I think it should concern us all. To the extent that we are participating in the historical debate about what the future of humanity should look like, it is important for us to know what makes people prefer certain views over others. To give you a fictitious example of a possible effect I might discover: it may turn out that being very extraverted predisposes you to be uninterested in Artificial Intelligence and its implications. If this is the case, we could pinpoint possible sources of bias in certain communities and ideological movements, thereby increasing the chances of making more rational decisions.

The survey is scheduled to be closed in 2 days, on July 30th 2015. That said, I am willing to extend the deadline until August 2nd if I see that the number of LessWrongers answering the questionnaire is not slowing down by the 30th. [July 31st edit: I extend the deadline until midnight (California time) of August 2nd of 2015.]

Thank you all!

Andrés :)

Here are some links about my work in case you are interested and want to know more:

A lot of people value indefinite life extension, but most have their own preferred method of achieving it. The goal of this map is to present all known ways of radical life extension in an orderly and useful way.

A rational person could choose to implement all of these plans or to concentrate only on one of them, depending on his available resources, age and situation. Such actions may be personal or social; both are necessary.

The roadmap consists of several plans; each of them acts as insurance in the case of failure of the previous plan. (The roadmap has a similar structure to the "Plan of action to prevent human extinction risks".) The first two plans contain two rows, one of which represents personal actions or medical procedures, and the other represents any collective activity required.

Plan A. The most obvious way to reach immortality is to survive until the creation of Friendly AI; in that case if you are young enough and optimistic enough, you can simply do nothing – or just fund MIRI. However, if you are older, you have to jump from one method of life extension to the next as they become available. So plan A is a relay race of life extension methods, until the problem of death is solved.

This plan includes actions to defeat aging, to grow and replace diseased organs with new bioengineered ones, to get a nanotech body and in the end to be scanned into a computer. It is an optimized sequence of events, and depends on two things – your personal actions (such as regular medical checkups), and collective actions such as civil activism and scientific research funding.

Plan B. However, if Plan A fails, i.e. if you die before the creation of superintelligence, there is Plan B, which is cryonics. Some simple steps can be taken now, such as calling your nearest cryocompany about a contract.

Plan C. Unfortunately, cryonics could also fail, and in that case Plan C is invoked. Of course it is much worse – less reliable and less proven. Plan C is so-called digital immortality, where one could be returned to life based on existing recorded information about that person. It is not a particularly good plan, because we are not sure how to solve the identity problem which will arise, and we don’t know if the collected amount of information would be enough. But it is still better than nothing.

Plan D. Lastly, if Plan C fails, we have Plan D. It is not a plan in fact, it is just hope or a bet that immortality already exists somehow: perhaps there is quantum immortality, or perhaps future AI will bring us back to life.

The first three plans demand particular actions now: we need to prepare for all of them simultaneously. All of the plans will lead to the same result: our minds will be uploaded into a computer with help of highly developed AI.

The plans could also help each other. Digital immortality data may help to fill any gaps in the memory of a cryopreserved person. Also, cryonics raises the chance that quantum immortality will result in something useful: you are more likely to be cryopreserved and successfully revived than to live naturally until you are 120 years old.

I'm sure that many of you here have read Quantum Computing Since Democritus. In the chapter on the anthropic principle the author presents the Dice Room scenario as a metaphor for human extinction. The Dice Room scenario is this:

1. You are in a world with a very, very large population (potentially unbounded).

2. There is a madman who kidnaps 10 people and puts them in a room.

3. The madman rolls two dice. If they come up snake eyes (both ones) then he murders everyone.

4. Otherwise he releases everyone, then goes out and kidnaps 10 times as many people as before, and returns to step 3.

The question is this: if you are one of the people kidnapped at some point, what is your probability of dying? Assume you don't know how many rounds of kidnappings have preceded yours.

As a metaphor for human extinction, think of the population of this world as being all humans who ever have or ever may live, each batch of kidnap victims as a generation of humanity, and rolling snake eyes as an extinction event.

The book gives two arguments, which are both purported to be examples of Bayesian reasoning:

1. The "proximate risk" argument says that your probability of dying is just the prior probability that the madman rolls snake eyes for your batch of kidnap victims -- 1/36.

2. The "proportion murdered" argument says that about 9/10 of all people who ever go into the Dice Room die, so your probability of dying is about 9/10.

Obviously this is a problem. Different decompositions of a problem should give the same answer, as long as they're based on the same information.

I claim that the "proportion murdered" argument is wrong. Here's why. Let pi(t) be the prior probability that you are in batch t of kidnap victims. The proportion murdered argument relies on the property that pi(t) increases exponentially with t: pi(t+1) = 10 * pi(t). If the madman murders at step t, then your probability of being in batch t is

pi(t) / SUM(u: 1 <= u <= t: pi(u))

and, if pi(u+1) = 10 * pi(u) for all u < t, then this does indeed work out to about 9/10. But the values pi(t) must sum to 1; thus they cannot increase indefinitely, and in fact it must be that pi(t) -> 0 as t -> infinity. This is where the "proportion murdered" argument falls apart.
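That 9/10 figure, and its dependence on the exponentially growing prior, can be checked numerically. A minimal sketch of the formula above (plain Python, nothing assumed beyond the text):

```python
# Check the claim: if pi(u+1) = 10 * pi(u) for all u < t, then
#   P(you are in batch t | murder happens at step t)
#     = pi(t) / SUM(u: 1 <= u <= t: pi(u))
# works out to about 9/10.

def prob_in_last_batch(t, ratio=10.0):
    # pi(u) up to a normalization constant, which cancels in the ratio
    weights = [ratio ** u for u in range(1, t + 1)]
    return weights[-1] / sum(weights)

for t in [2, 5, 10]:
    print(t, prob_in_last_batch(t))
# The value approaches 1 - 1/10 = 0.9 as t grows.
```

Of course, this only confirms the conditional calculation; the point of the argument is that no proper prior can satisfy pi(u+1) = 10 * pi(u) for all u, since the weights would not sum to 1.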

This is the public group rationality diary for July 12th - August 1st, 2015. It's a place to record and chat about it if you have done, or are actively doing, things like:

Established a useful new habit

Obtained new evidence that made you change your mind about some belief

Decided to behave in a different way in some set of situations

Optimized some part of a common routine or cached behavior

Consciously changed your emotions or affect with respect to something

Consciously pursued new valuable information about something that could make a big difference in your life

Learned something new about your beliefs, behavior, or life that surprised you

Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.

I propose three areas of thinking: "past", "future", and "present", followed by a hard question.

Past

This can be classified as any system of review: any overview of past progress, and any learning from the past, broadly including history, past opportunities or challenges, shelved projects, known problems, and previous progress. A fraction of your time should be spent in this process of review in order to inform your plan for the future.

Future

Any planning-thinking tasks, or strategic intention about plotting a course forward towards a purposeful goal. This can overlap with past-strategising by the nature of using the past to plan for the future.

Present

These actions include the tasks that get done now. This is where stuff really happens. (Technically both past-thinking and future-thinking count as something you can do in the present, and take up time in the present, but I want to keep them apart for now.) This is the living, breathing, getting-things-done time: the bricks and mortar of actually building something, creating and generating progress towards a designated future goal.

The hard question

I am stuck on finding a heuristic or estimate for how much time should be spent in each area of being/doing. I have reached a point where I have uncovered a great deal of neglect of both past events and making purposeful plans for the future.

Where if 100% of time is spent on the past, nothing will ever get done, other than a clear understanding of your mistakes;

Similarly 100% on the future will lead to a lot of dreaming and no progress towards the future.

Equally, if all your time is spent running very fast in the present-doing state, you might be going very fast; but by the nature of not knowing where you are going, you might be in a state of not-even-wrong, and not know it.

10/10/80? 20/20/60? 25/25/50? 10/20/70?

I am looking for suggestions as to an estimate of how to spend each 168 hour week that might prove a fruitful division of time, or a method or reason for a certain division (at least before I go all empirical trial-and-error on this puzzle).

I would be happy with recommended reading on the topic if that can be provided.

Have you ever personally tackled the buckets? Did you come up with a strategy for how to decide between them?

This is the first in a series of posts I am putting together on a personal blog I just started two days ago as a collection of my musings on astrobiology ("The Great A'Tuin" - sorry, I couldn't help it), and will be reposting here. Much has been written here about the Fermi paradox and the 'great filter'. It seems to me that going back to a somewhat more basic level of astronomy and astrobiology is extremely informative to these questions, and so this is what I will be doing. The bloggery is intended for a slightly more general audience than this site (hence much of the content of the introduction) but I think it will be of interest. Many of the points I will be making are ones I have touched on in previous comments here, but hope to explore in more detail.

This post is a combined version of my first two posts - an introduction, and a discussion of our apparent position in space and time in the universe. The blog posts may be found at:

This blog is to be a repository for the thoughts and analysis I've accrued over the years on the topic of astrobiology, and the place of life and intelligence in the universe. All my life I've been pulled to the very large and the very small. Life has always struck me as the single most interesting thing on Earth, with its incredibly fine structure and vast, amazing history and fantastic abilities. At the same time, the vast majority of what exists is NOT on Earth. Going up in size from human scale by the same number of orders of magnitude as you go down to get to a hydrogen atom, you get just about to Venus at its closest approach to Earth - or one billionth the distance to the nearest star. The large is much larger than the small is small. On top of this, we now know that the universe as we know it is much older than life on Earth. And we know so little of the vast majority of the universe.

There's a strong tendency towards specialization in the sciences. These days, there pretty much has to be for anybody to get anywhere. Much of the great foundational work of physics was done on tabletops, and the law of gravitation was derived from data on the motions of the planets taken without the benefit of so much as a telescope. All the low-hanging fruit has been picked. To continue to further knowledge of the universe, huge instruments and vast energies are put to bear in astronomy and physics. Biology is arguably a bit different, but the very complexity that makes living systems so successful and so fascinating to study means that there is so much to study that any one person is often only looking at a very small problem.

This has distinct drawbacks. The universe does not care for our abstract labels of fields and disciplines - it simply is, at all scales simultaneously at all times and in all places. When people focus narrowly on their subject of interest, it can prevent them from realizing the implications of their findings on problems usually considered a different field.

It is one of my hopes to try to bridge some gaps between biology and astronomy here. I very nearly double-majored in biology and astronomy in college; the only thing that prevented this (leading to an astronomy minor) was a bad attitude towards calculus. As is, I am a graduate student studying basic cell biology at a major research university, who nonetheless keeps in touch with a number of astronomer friends and keeps up with the field as much as possible. I quite often find that what I hear and read about has strong implications for questions of life elsewhere in the universe, but see so few of these implications actually get publicly discussed. All kinds of information shedding light on our position in space and time, the origins of life, the habitability of large chunks of the universe, the course that biospheres take, and the possible trajectories of intelligences seem to me to be out there unremarked.

It is another of my hopes to try, as much as is humanly possible, to take a step back from the usual narratives about extraterrestrial life and instead work from something closer to first principles: what we actually have observed and have not, what we can observe and what we cannot, and what this leaves open, likely, or unlikely. In my study of the history of the ideas of extraterrestrial life and extraterrestrial intelligence, all too often these take a back seat to popular narratives of the day. In the 16th century the notion that the Earth moved in a similar way to the planets gained currency and led to the suppositions that they might be made of similar stuff and that the planets might even be inhabited. The hot question was, of course, whether their inhabitants would be Christians, and what their relationship with God would be, given the anthropocentric biblical creation stories. In the late 19th and early 20th century, Lowell's illusory canals on Mars were advanced as evidence for a Martian socialist utopia. In the 1970s, Carl Sagan waxed philosophical on the notion that contacting old civilizations might teach us how to save ourselves from nuclear warfare. Today, many people focus on the Fermi paradox - the apparent contradiction that, since much of the universe is quite old, extraterrestrials experiencing continuing technological progress and growth should have colonized and remade it in their image long ago, and yet we see no evidence of this. I move that all of these notions have a similar root - inflating the hot concerns and topics of the day to cosmic significance and letting them obscure the actual, scientific questions that can be asked and answered.

Life and intelligence in the universe is a topic worth careful consideration, from as many angles as possible. Let's get started.

Space and Time

Those of an anthropic bent have often made much of the fact that we are only 13.7 billion years into what is apparently an open-ended universe that will expand at an accelerating rate forever. The era of the stars will last a trillion years; why do we find ourselves at this early date if we assume we are a ‘typical’ example of an intelligent observer? In particular, this has lent support to lines of argument that perhaps the answer to the ‘great silence’ and lack of astronomical evidence for intelligence or its products in the universe is that we are simply the first. This notion requires, however, that we are actually early in the universe when it comes to the origin of biospheres and by extension intelligent systems. It has become clear recently that this is not the case.

The clearest research I can find illustrating this is the work of Sobral et al, illustrated here http://arxiv.org/abs/1202.3436 via a paper on arxiv and here http://www.sciencedaily.com/releases/2012/11/121106114141.htm via a summary article. To simplify what was done, these scientists performed a survey of a large fraction of the sky looking for the emission lines put out by emission nebulae, clouds of gas which glow like neon lights, excited by the ultraviolet light of huge, short-lived stars. The amount of line emission from a galaxy is thus a rough proxy for the rate of star formation – the greater the rate of star formation, the larger the number of large stars exciting interstellar gas into emission nebulae. The authors used the redshift of the known hydrogen emission lines to determine the distance to each instance of emission, and performed corrections to deal with the known expansion rate of the universe. The results were striking. Per unit mass of the universe, the current rate of star formation is less than 1/30 of the peak rate they measured 11 gigayears ago. It has been constantly declining over the history of the universe at a precipitous rate. Indeed, their preferred model to which they fit the trend converges towards a finite quantity of stars formed as you integrate total star formation into the future to infinity, with the total number of stars that will ever be born only being 5% larger than the number of stars that have been born at this time.

In summary, 95% of all stars that will ever exist already exist. The smallest, longest-lived stars will shine for a trillion years, but for most of their history almost no new stars will have formed.

At first this seems to reverse the initial conclusion that we came early, suggesting we are instead latecomers. This is not true, however, when you consider where and when stars of different types can form, and the fact that different galaxies have very different histories. Most galaxies formed via gravitational collapse from cool gas clouds and smaller precursor galaxies quite a long time ago, with a wide variety of properties. Dwarf galaxies have low masses, and their early bursts of star formation lead to energetic stars with strong stellar winds and lots of ultraviolet light, which eventually go supernova. Their energetic lives and even more energetic deaths appear to usually blast star-forming gases out of their galaxies’ weak gravity or render it too hot to re-collapse into new star-forming regions, quashing their star formation early. Giant elliptical galaxies, containing many trillions of stars apiece and dominating the cores of galactic clusters, have ample gravity but form with nearly no angular momentum. As such, most of their cool gas falls straight into their centers, producing an enormous burst of low-heavy-element star formation that uses most of the gas. The remaining gas is again either blasted into intergalactic space or rendered too hot to recollapse and accrete, by a combination of the action of energetic young stars and the infall of gas onto the central black hole producing incredibly energetic outbursts. (It should be noted that a full 90% of the non-dark-matter mass of the universe appears to be in the form of very thin X-ray-hot plasma clouds surrounding large galaxy clusters, unlikely to condense to the point of star formation via understood processes.) Thus, most dwarf galaxies and giant elliptical galaxies contributed to the early star formation of the universe but are producing few or no stars today, have very low levels of heavy-element-rich stars, and are unlikely to make many more going into the future.

Spiral galaxies are different. Their distinguishing feature is the way they accreted – namely, with a large amount of angular momentum. This allows large amounts of their cool gas to remain spread out away from their centers. This moderates the rate of star formation, preventing the huge pulses of star formation and black hole activation that exhaust star-forming gas and prevent gas inflow in giant ellipticals. At the same time, their greater mass compared to dwarf galaxies ensures that the modest rate of star formation they do undergo does not blast nearly as much matter out of their gravitational pull. Some does leave over time, and their rate of inflow of fresh cool gas does apparently decrease over time – there are spiral galaxies that do seem to have shut down star formation. But on the whole a spiral is a place that maintains a modest rate of star formation for gigayears, while heavy elements get more and more enriched over time. These galaxies thus dominate the star production in the later eras of the universe, and dominate the population of stars produced with the large amounts of heavy elements needed to produce planets like ours. They do settle down slowly over time, and eventually all spirals will either run out of gas or merge with each other to form giant ellipticals, but for a long time they remain a class apart.

Considering this, we’re just about where we would expect a planet like ours (and thus a biosphere-as-we-know-it) to exist in space, and on a coarse scale in time. Let’s look closer at our galaxy now. Our galaxy is generally agreed to be about 12 billion years old based on the ages of globular clusters, with a few interloper stars here and there that are older and would’ve come from an era before the galaxy was one coherent object. It will continue forming stars for about another 5 gigayears, at which point it will undergo a merger with the Andromeda galaxy, the nearest large spiral galaxy. This merger will most likely put an end to star formation in the combined resultant galaxy, which will probably wind up as a large elliptical after one final exuberant starburst. Our solar system formed about 4.5 gigayears ago, putting its formation pretty much halfway along the productive lifetime of the galaxy (and probably something like 2/3 of the way along its complement of stars produced, since spirals DO settle down with age, though more of its later stars will be metal-rich).

On a stellar and planetary scale, we once again find ourselves where and when we would expect your average complex biosphere to be. Large stars die fast – star brightness goes up with the 3.5th power of star mass, and thus star lifetime goes down with the 2.5th power of mass. A 2 solar mass star would be 11 times as bright as the sun and only live about 2 billion years – a time along the evolution of life on Earth before photosynthesis had managed to oxygenate the air, and in which the majority of life on earth (but not all – see an upcoming post) could be described as “algae”. Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe), stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet.
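These scaling relations are easy to check numerically. A minimal sketch, assuming a solar main-sequence lifetime of roughly 10 billion years as the baseline (a standard ballpark figure, not stated in the text):

```python
# Scaling relations from the text: luminosity ~ M^3.5, so main-sequence
# lifetime ~ fuel / luminosity ~ M / M^3.5 = M^-2.5.
# Assumed baseline: the Sun lives ~10 Gyr on the main sequence.

def luminosity(mass):
    """Luminosity in solar units, for mass in solar masses."""
    return mass ** 3.5

def lifetime_gyr(mass, solar_lifetime_gyr=10.0):
    """Main-sequence lifetime in gigayears, for mass in solar masses."""
    return solar_lifetime_gyr * mass ** -2.5

print(luminosity(2.0))    # ~11.3 times solar, the "11 times as bright"
print(lifetime_gyr(2.0))  # ~1.8 Gyr, matching "only live about 2 billion years"
```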

All stars also slowly brighten as they age – the sun is currently about 30% brighter than it was when it formed, and it will wind up about twice as bright as its initial value just before it becomes a red giant. Depending on whose models of climate sensitivity you use, the Earth’s biosphere probably has somewhere between 250 million years and 2 billion years before the oceans boil and we become a second Venus. Thus, we find ourselves in the latter third-to-twentieth of the history of Earth’s biosphere (consistent with complex life taking time to evolve).

Together, all this puts our solar system – and by extension our biosphere – pretty much right where we would expect to find it in space, and right in the middle of where one would expect to find it in time. Once again, as observers we are not special. We do not find ourselves in the unexpectedly early universe, ruling out one explanation for the Fermi paradox sometimes put forward – that we do not see evidence for intelligence in the universe because we simply find ourselves as the first intelligent system to evolve. This would be tenable if there was reason to think that we were right at the beginning of the time in which star systems in stable galaxies with lots of heavy elements could have birthed complex biospheres. Instead we are utterly average, implying that the lack of obvious intelligence in the universe must be resolved either via the genesis of intelligent systems being exceedingly rare, or via intelligent systems simply not spreading through the universe or becoming astronomically visible for one reason or another.

In my next post, I will look at the history of life on Earth, the distinction between simple and complex biospheres, and the evidence for or against other biospheres elsewhere in our own solar system.

The Fermi Paradox leads us to conclude that either A) intelligent life is extremely improbable, B) intelligent life very rarely grows into a higher-level civilization, or C) higher-level civilizations are common, but are not easy to spot. But each of these explanations is hard to believe. It is hard to believe that intelligent life is rare, given that hominids evolved intelligence so quickly. It is hard to believe that intelligence is inherently self-destructive, since as soon as an intelligent species gains the ability to colonize distant planets, it becomes increasingly unlikely that the entire species could be wiped out; meanwhile, it appears that our own species is on the verge of attaining this potential. It is hard to believe C, since natural selection favors expansionism, so if even a tiny fraction of higher-level civilizations value expansion, then such a civilization becomes extremely visible to observers due to its exponential rate of expansion. Not to mention that our own system should have already been colonized by now.

Here I present a new explanation on why higher-level civilizations might be common, and yet still undetected. The key assumption is the existence of a type of Matrioshka brain which I call a "Catastrophe Engine." I cannot even speculate on the exotic physics which might give rise to such a design. However, the defining characteristics of a Catastrophe Engine are as follows:

The Catastrophe Engine is orders of magnitude more computationally powerful than any Matrioshka Brain possible by conventional physics.

The Catastrophe Engine has a fixed probability 1 - e^(-λt) of "meltdown" in any interval of t seconds. In other words, the lifetime of a Catastrophe Engine is an exponentially distributed random variable with a mean lifetime of 1/λ seconds.

When the Catastrophe Engine suffers a meltdown, it has a destructive effect of radius r, which, among other things, results in the destruction of all other Catastrophe Engines within the radius, and furthermore renders it permanently impossible to rebuild Engines within the radius.

A civilization using Catastrophe Engines would be incentivized to construct the Engines far apart from each other, hence explaining why we have never detected such a civilization. Some simple math shows why this would be the case.

Consider a large spherical volume of space. A civilization places a number of Catastrophe Engines in the volume: suppose the Engines are placed at a density such that each Engine is within radius r of n other such Engines. The civilization seeks to maximize the total computational lifetime of the collection of Engines.

The probability that any given Engine will be destroyed by itself or its neighbors in any given interval of t seconds is 1 - e^(-nλt).

The expected lifetime of an Engine is therefore T = 1/(n λ).

The total computational lifetime of the system is proportional to nT = n/(n λ) = 1/λ.

Hence, there is no incentive for the civilization to build Catastrophe Engines to a density n greater than 1. If the civilization gains extra utility from long computational lifetimes, as we could easily imagine, then the civilization is in fact incentivized to keep the Catastrophe Engines from getting too close.
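This conclusion is easy to verify with a small simulation. The sketch below assumes, per the setup above, that each Engine's lifetime is exponential with rate λ and that a cluster of n mutually in-range Engines is destroyed entirely at its first meltdown:

```python
import random

def expected_total_computation(n, lam, trials=50_000, seed=0):
    """Monte Carlo estimate of total computation for a cluster of n
    mutually in-range Engines: the first meltdown (the minimum of n
    Exp(lam) draws) destroys them all, so total computation per run
    is n times that minimum."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        first_meltdown = min(rng.expovariate(lam) for _ in range(n))
        total += n * first_meltdown
    return total / trials

for n in [1, 2, 5, 10]:
    print(n, round(expected_total_computation(n, lam=1.0), 3))
# All estimates hover around 1/lam = 1.0: packing Engines more densely
# buys no extra total computation, matching nT = 1/λ.
```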

Now suppose the radius r is extremely huge, i.e. on the order of intergalactic distances. Then the closest Catastrophe Engine is likely on the order of r away from ourselves, and may be quite difficult to spot even if it is highly visible.

On the other hand, the larger the radius of destruction r, the more likely it is that we would be able to observe the effects of a meltdown given that it occurs within our visible universe. But since a larger radius also implies a smaller number of Catastrophe Engines, a sufficiently large radius (and long expected lifetime) makes it more likely that a meltdown has simply not yet occurred in our visible universe.

The existence of Catastrophe Engines alone does not explain the Fermi Paradox. We also have to rule out the possibility that a civilization with Catastrophe Engines will still litter the universe with visible artifacts, or that highly visible expansionist civilizations which have not yet developed Catastrophe Engines would coexist with invisible civilizations using Catastrophe Engines. But there are many ways to fill in these gaps. Catastrophe Engines might be so potent that a civilization ceases to bother with any possibly visible projects other than the construction of additional Catastrophe Engines. Furthermore, it could be possible that civilizations using Catastrophe Engines actively neutralize other spacefaring civilizations, fearing disruption to the Catastrophe Engines. Or that Catastrophe Engines are rapidly discovered: their principles become known to most civilizations before those civilizations have become highly visible.

The Catastrophe Engine is by no means a conservative explanation of the Fermi Paradox, since only the very most speculative principles of physics could possibly explain how an object of such destructive power could be constructed. Nevertheless, it is one explanation of how higher civilizations might be hard to detect as a consequence of purely economical motivations.

Supposing this is a correct explanation of the Fermi paradox, does it result in a desirable outcome for the long-term future of the human race? Perhaps not, since it necessarily implies the existence of a destructive technology that could damage a distant civilization. Any civilization lying close enough to be affected by our civilization would be incentivized to neutralize us before we gain this technology. On the other hand, if we could gain the technology before being detected, then mutually assured destruction could give us a bargaining chip, say, to be granted virtual tenancy in one of their Matrioshka Brains.

Our summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. Previous posts in the series are listed at the above link.

I'm often asked whether donations to MIRI now are more important than donations later. Allow me to deliver an emphatic yes: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.

That's a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It's quite possible that in a few years' time significant public funding will be flowing into this field.

(It's also quite possible that it won't, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. But if I had to bet, I would bet that it's going to be much easier to find funding for AI alignment research in five years' time.)

In other words, the funding bottleneck is loosening — but it isn't loose yet.

We don't presently have the funding to grow as fast as we could over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.

Which brings me to the second reason why funding now is probably much more important than funding later: because growth now is much more valuable than growth later.

There's an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community's response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field's future direction.

People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are more vague and less well-understood.

It's likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.

The alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years' time. But it's nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.

Programmers do something called Test Driven Development. Basically, they write tests that say "I expect my code to do this", then write more code, and if the subsequent code they write breaks a test they wrote, they'll be notified.

Wouldn't it be cool if there was Test Driven Thinking?

Write tests: "I expect that this is true."

Think: "I claim that A is true. I claim that B is true."

If A or B causes any of your tests to fail, you'd be notified.

I don't know where to run with this though. Maybe someone else will be able to take this idea further. My thoughts:

It'd be awesome if you could apply TDT and be notified when your tests fail, but this seems very difficult to implement.

I'm not sure what a lesser but still useful version would look like.

Maybe this idea could serve as some sort of intuition pump for intellectual hygiene ("What do you think you know, and why do you think you know it?"). I.e., having understood the idea of TDT, maybe it'd motivate and help people to apply intellectual hygiene, which is sort of like a manual version of TDT, where you're the one constantly running the tests.
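For what it's worth, the core loop of such a tool can be sketched in a few lines of Python. Everything here (the `expect`/`claim` names, the belief dictionary) is hypothetical, a toy illustration rather than an existing system:

```python
# Toy sketch of Test Driven Thinking: register tests about your beliefs,
# and every new claim re-runs them, flagging any that now fail.

beliefs = {}
tests = []

def expect(name, predicate):
    """Register a test: a named predicate over the belief set."""
    tests.append((name, predicate))

def claim(key, value):
    """Assert a belief, then return the names of any tests it breaks."""
    beliefs[key] = value
    return [name for name, pred in tests if not pred(beliefs)]

expect("A and B cannot both be true",
       lambda b: not (b.get("A") and b.get("B")))

print(claim("A", True))  # no test broken yet
print(claim("B", True))  # the registered test now fails and is flagged
```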

(This post is not an attempt to convey anything new, but is instead an attempt to convey the concept of Bayesian reasoning as simply as possible. There have been other elementary posts that have covered how to use Bayes’ theorem: here, here, here and here)

Bayes’ theorem is about the probability that something is true given some piece or pieces of evidence. In a really simple form it is basically the equation below:

probability it’s true = (expected number of times it’s true) / (expected number of times it’s true + expected number of times it’s false)

This will be explained using the following coin flipping scenario:

If someone is flipping two coins: one fair and one biased (has heads on both sides), then what is the probability that the coin flipped was the fair coin given that you know that the result of the coin being flipped was heads?

Let’s figure this out by listing out the potential states using a decision tree:

We know that the tail state is not true because the result of the coin being flipped was heads. So, let’s update the decision tree:

The decision tree now lists all of the possible states given that the result was heads.

Let's now plug the values into the formula. We know that there are three potential states: one in which the coin is fair and two in which it is biased. Let's assume that each state has the same likelihood.

So, the result is 1 / (1 + 2), which is 1 / 3, which equals about 33%. Using the formula we have found out that there is a 33% chance that the coin flipped was the fair one when we already know that the result of the flip was heads.
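The same answer can be reached by enumerating the equally likely outcomes (a quick sketch; the labels are just for illustration):

```python
from fractions import Fraction

# Enumerate the equally likely (coin, face) outcomes: the fair coin has
# heads and tails, the biased coin has heads on both sides.
outcomes = [("fair", "heads"), ("fair", "tails"),
            ("biased", "heads"), ("biased", "heads")]

# Keep only the outcomes consistent with the evidence (result was heads).
heads = [coin for coin, face in outcomes if face == "heads"]

p_fair_given_heads = Fraction(heads.count("fair"), len(heads))
print(p_fair_given_heads)  # 1/3
```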

At this point you may be wondering what any of this has to do with Bayesian reasoning. Well, the relation is that the above formula is pretty much the same as Bayes’ theorem, which in its explicit form is:

P(A|B) = (P(B|A) * P(A)) / (P(B|A) * P(A) + P(B|~A) * P(~A))

You can see that P(B|A) * P(A) (in bold) is on both the top and the bottom of the equation. It represents “expected number of times it’s true” in the generic formula above. P(B|~A) * P(~A) represents "expected number of times it's false".
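As a check, plugging the coin example into the explicit form gives the same 1/3 found above (a sketch; variable names are mine):

```python
from fractions import Fraction

# P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A)),
# where A = "coin is fair" and B = "result was heads".
p_a = Fraction(1, 2)           # P(fair): either coin equally likely
p_b_given_a = Fraction(1, 2)   # P(heads | fair)
p_b_given_not_a = Fraction(1)  # P(heads | biased): both sides are heads

numerator = p_b_given_a * p_a
p_a_given_b = numerator / (numerator + p_b_given_not_a * (1 - p_a))
print(p_a_given_b)  # 1/3
```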

You don’t need to worry about what the whole formula means yet as this post is just about how to use Bayesian reasoning and why it is useful. If you want to find out how to deduce Bayes' theorem, check out this post. If you want some examples of how to use Bayes' theorem see one of these posts: 1, 2, 3 and 4.

Let’s now continue on. This time we will be going through a totally different example. This example will demonstrate what it is like to use Bayesian reasoning.

Imagine a scenario with a teacher and a normally diligent student. The student tells the teacher that they have not completed their homework because their dog ate it. Take note of the following:

H stands for the hypothesis, which is that the student did their homework. This is possible, but the teacher does not think that it is very likely. The teacher only has the evidence of the student’s diligence to back up this hypothesis, which does affect the probability that the hypothesis is correct, but not by much.

~H stands for the opposite hypothesis, which is that the student did not do their homework. The teacher thinks that this is likely and also believes that the evidence (no extra evidence backing up the student’s claim and a cliché excuse) points towards this opposite hypothesis.

Which do you think is more probable: H or ~H? If you look at how typical ~H is and how likely the evidence is if ~H is correct, then I believe that we must see ~H (which stands for the student did not do their homework) as more probable. The below picture demonstrates this. Please note that higher probability is represented as being heavier i.e. lower in the weight-scale pictures below.

The teacher is using Bayesian reasoning, so they don’t actually take ~H (student did not do their homework) as being true. They take it as being probable given the available evidence. The teacher knows that if new evidence is provided then this could make H more probable and ~H less probable. So, knowing this, the teacher tells the student that if they bring in their completed homework tomorrow and provide some new evidence, then they will not get a detention.

Let’s assume that the next day the student does bring in their completed homework and they also bring in the remains of the original homework that looks like it has been eaten by a dog. Now, the teacher, since they have received new evidence, must update the probabilities of the hypotheses. The teacher also remembers the original evidence (the student’s diligence). When the teacher updates the probabilities of the hypotheses, H (student did their homework) becomes more probable and ~H (student did not do their homework) becomes less probable, but note that it is not considered impossible. After updating the probabilities of the hypotheses the teacher decides to let the student out of the detention. This is because the teacher now sees H as being the best hypothesis that is able to explain the evidence.
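The teacher's update can be made numeric. Note that all the probabilities below are invented for illustration; the story itself gives no numbers. Here H = "student did their homework" and E = "student brings in completed homework plus dog-chewed remains":

```python
# A numeric sketch of a Bayesian update with made-up probabilities.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    num = p_e_given_h * prior_h
    return num / (num + p_e_given_not_h * (1 - prior_h))

prior = 0.2  # the teacher initially doubts the cliché excuse
posterior = bayes_update(prior, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(round(posterior, 2))  # 0.69: H becomes much more probable, but ~H stays possible
```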

The below picture demonstrates the updated probabilities.

If your reasoning is similar to the teacher’s, then congratulations: this means that you are using Bayesian reasoning. Bayesian reasoning involves incorporating conditional probabilities and updating these probabilities when new evidence is provided.

You may be looking at this and wondering what all the fuss is over Bayes’ Theorem. You might be asking yourself: why do people think this is so important? Well, it is true that the actual process of weighing evidence and changing beliefs is not a new practice, but the importance of the theorem does not actually come from the process; it comes from the fact that this process has been quantified, i.e. made into an expressible equation (Bayes’ Theorem).

Overall, the theorem and its related reasoning are useful because they take into account alternative explanations and how likely they are given the evidence that you are seeing. This means that you can’t just pick a theory and take it to be true because it fits the evidence. You need to also look at alternative hypotheses and see if they explain the evidence better. This leads you to start thinking about all hypotheses in terms of probabilities rather than certainties. It also leads you to think about beliefs in terms of evidence. If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence. A corollary of this is that different evidence leads to different probabilities.

(This post is not an attempt to convey anything new, but is instead just an attempt to provide background context on how Bayes' theorem works by describing how it can be deduced. This is not meant to be a formal proof. There have been other elementary posts that have covered how to use Bayes’ theorem: here, here, here and here)

Consider the following example:

Imagine that your friend has a bowl that contains cookies in two varieties: chocolate chip and white chip macadamia nut. You think to yourself: “Yum. I would really like a chocolate chip cookie”. So you reach for one, but before you can pull one out your friend lets you know that you can only pick one, that you cannot look into the bowl and that all the cookies are either fresh or stale. Your friend also tells you that there are 80 fresh cookies, 40 chocolate chip cookies, 15 stale white chip macadamia nut cookies and 100 cookies in total. What is the probability that you will pull out a fresh chocolate chip cookie?

To figure this out we will create a truth table. If we fill in the values that we do know, then we will end up with the below table. The cell that we want to find the value of is the number of fresh chocolate chip cookies, marked with a question mark.

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | ?              |                          | 80    |
| Stale |                | 15                       |       |
| Total | 40             |                          | 100   |

If we look at the above table we can notice that, like in Sudoku, there are some values that we can fill in based on the information that we already know. These values are:

The number of stale cookies. We know that 80 cookies are fresh and that there are 100 cookies in total, so this means that there must be 20 stale cookies.

The number of white chip macadamia nut cookies. We know that there are 40 chocolate chip cookies and 100 cookies in total, so this means that there must be 60 white chip macadamia nut cookies

If we fill in both these values we end up with the below table:

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | ?              |                          | 80    |
| Stale |                | 15                       | 20    |
| Total | 40             | 60                       | 100   |

If we look at the table now, we can see that there are two more values that can be filled in. These values are:

The number of fresh white chip macadamia nut cookies. We know that there are 60 white chip macadamia nut cookies and that 15 of these are stale, so this means that there must be 45 fresh white chip macadamia nut cookies.

The number of stale chocolate chip cookies. We know that there are 20 stale cookies and that 15 of these are white chip macadamia nut, so this means that there must be 5 stale chocolate chip cookies.

If we fill in both these values we end up with the below table:

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | ?              | 45                       | 80    |
| Stale | 5              | 15                       | 20    |
| Total | 40             | 60                       | 100   |

We can now find out the number of fresh chocolate chip cookies. It is important to note that there are two ways in which we can do this. These two ways are called the inverse of each other (this will be used later):

Using the filled in row values. We know that there are 80 fresh cookies and that 45 of these are white chip macadamia nut, so this means that there must be 35 fresh chocolate chip cookies.

Using the filled in column values. We know that there are 40 chocolate chip cookies and the 5 of these are stale, so this means that there must be 35 fresh chocolate chip cookies.

If we fill in the last value in the table we end up with the below table:

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | 35             | 45                       | 80    |
| Stale | 5              | 15                       | 20    |
| Total | 40             | 60                       | 100   |

We can now find out the probability of choosing a fresh chocolate chip cookie by dividing the number of fresh chocolate chip cookies (35) by the total number of cookies (100). This is 35 / 100 which is 35%. We now have the probability of choosing a fresh chocolate chip cookie (35%).
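The completed table and the resulting probability can be checked in a few lines (a sketch; the dictionary keys are just shorthand labels):

```python
from fractions import Fraction

# The filled-in cookie table as counts (rows: freshness; columns: variety).
table = {
    ("fresh", "choc"): 35, ("fresh", "macadamia"): 45,
    ("stale", "choc"): 5,  ("stale", "macadamia"): 15,
}
total = sum(table.values())  # 100 cookies in total

# P(fresh AND chocolate chip) = count / total
p_fresh_choc = Fraction(table[("fresh", "choc")], total)
print(p_fresh_choc)  # 7/20, i.e. 35%
```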

To get to Bayes' theorem, I will need to reduce the terms to a simpler form.

P(A) = probability of finding some observation A. You can think of this as the probability of the picked cookie being chocolate chip.

P(B) = the probability of finding some observation B. You can think of this as the probability of the picked cookie being fresh. Please note that A is what we want to find given B. If it was desired, then A could be fresh and B chocolate chip.

P(~A) = the negated version of finding some observation A. You can think of this as the probability of the picked cookie not being chocolate chip, i.e. being white chip macadamia nut instead.

P(~B) = a negated version of finding some observation B. You can think of this as the probability of the picked cookie not being fresh i.e. being stale instead.

P(A∩B) = probability of being both A and B. You can think of this as the probability of the picked cookie being fresh and chocolate chip.

Now, we will start getting a bit more complicated as we start moving into the basis of the Bayes’ Theorem. Let’s go through another example based on the original.

Let’s assume that before you pull out a cookie you notice that it is fresh. Can you then figure out the likelihood of it being chocolate chip before you pull it out? The answer is yes.

We will find this out using the table that we filled in previously. The important row is the Fresh row.

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | 35             | 45                       | 80    |
| Stale | 5              | 15                       | 20    |
| Total | 40             | 60                       | 100   |

Since we already know that the cookie is fresh, we can say that the likelihood of it being a chocolate chip cookie is equal to the number of fresh chocolate chip cookies (35) divided by the total number of fresh cookies (80). This is 35 / 80 which is 43.75%.

In a simpler form this is:

P(A|B) - The probability of A given B. You can think of this as the probability of the picked cookie being chocolate chip if you already know that it is fresh.

If we relook at the table we can see that there is some extra important information that we can find out about P(A|B). We can discover that it is equal to P(A∩B) / P(B). You can think of this as: the probability of the picked cookie being chocolate chip if you know that it is fresh (35 / 80) is equal to the probability of the picked cookie being fresh and chocolate chip (35 / 100) divided by the probability of it being fresh (80 / 100). This is P(A|B) = (35 / 100) / (80 / 100), which becomes 0.35 / 0.8, which is the same as the answer we found above (43.75%). Take note of the fact that P(A|B) = P(A∩B) / P(B), as we will use this later.
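The identity P(A|B) = P(A∩B) / P(B) can be verified directly with the cookie counts (a short sketch):

```python
from fractions import Fraction

# A = "chocolate chip", B = "fresh".
p_a_and_b = Fraction(35, 100)  # P(A∩B): fresh AND chocolate chip
p_b = Fraction(80, 100)        # P(B): fresh

p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)  # 7/16, i.e. 43.75%, matching 35 / 80
```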

Let’s now return to the inverse idea that was raised previously. If we want to know the probability of the picked cookie being fresh and chocolate chip, i.e. P(A∩B), then we can use the filled-in truth table.

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | 35             | 45                       | 80    |
| Stale | 5              | 15                       | 20    |
| Total | 40             | 60                       | 100   |

If we know that the cookie is fresh, as in the top row above, then we can find out that P(A∩B) = P(A|B) * P(B). This means that the probability of the picked cookie being fresh and chocolate chip (35 / 100) (remember that there were 100 cookies in total) is equal to the probability of it being chocolate chip given that you know that it is fresh (35 / 80) times the probability of it being fresh (80 / 100). So, we end up with P(A∩B) = (35 / 80) * (80 / 100), which becomes 35%, which is the same as 35 / 100, which we know is the right answer.

Alternatively, since we know that we can convert P(A|B) to P(A∩B) / P(B) (we found this out previously), we can also find out that P(A∩B) = P(A|B) * P(B). We can do this by using the following method:

Notice that P(B) is on both the top and bottom of the equation, which means that it can be crossed out

Cross out P(B) to give you P(A∩B) = P(A∩B)

The inverse situation is when you know that the cookie is chocolate chip, as in the left column of the table above. Using the left column we can find out that P(A∩B) = P(B|A) * P(A). This means that the probability of the picked cookie being fresh and chocolate chip (35 / 100) is equal to the probability of it being fresh given that you know it is chocolate chip (35 / 40) times the probability of it being chocolate chip (40 / 100). This is P(A∩B) = (35 / 40) * (40 / 100), which becomes 35%, which we know is the right answer.

Now, we have enough information to deduce the simple form of Bayes’ Theorem.

Let’s first recount what we know:

P(A|B) = P(A∩B) / P(B)

P(A∩B) = P(B|A) * P(A)

By taking the first fact: P(A|B) = P(A∩B) / P(B) and using the second fact to convert P(A∩B) to P(B|A) * P(A) you end up with P(A|B) = (P(B|A) * P(A)) / P(B) which is Bayes' Theorem in its simple form.
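The simple form can be checked numerically against the table (a sketch; A = chocolate chip, B = fresh):

```python
from fractions import Fraction

# Verifying P(A|B) = P(B|A) * P(A) / P(B) with the cookie counts.
p_b_given_a = Fraction(35, 40)  # P(fresh | chocolate chip)
p_a = Fraction(40, 100)         # P(chocolate chip)
p_b = Fraction(80, 100)         # P(fresh)

p_a_given_b = (p_b_given_a * p_a) / p_b
print(p_a_given_b)  # 7/16 = 43.75%, matching the direct table count 35 / 80
```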

From the simple form of Bayes' Theorem, there is one more conversion that we need to make to derive the explicit form of Bayes' Theorem which is the one we are trying to deduce.

To get to the explicit form version we need to first find out that P(B) = P(A) * P(B|A) + P(~A) * P(B|~A).

To do this let’s refer to the table again:

|       | Chocolate Chip | White Chip Macadamia Nut | Total |
|-------|----------------|--------------------------|-------|
| Fresh | 35             | 45                       | 80    |
| Stale | 5              | 15                       | 20    |
| Total | 40             | 60                       | 100   |

We can see that the probability that the picked cookie is fresh (80 / 100) is equal to the probability that it is fresh and chocolate chip (35 / 100) plus the probability that it is fresh and white chip macadamia nut (45 / 100). So, we can find out that P(B) (cookie is fresh) is equal to 35 / 100 + 45 / 100, which is 0.8 or 80%, which we know is the answer. This gives the formula: P(B) = P(A∩B) + P(~A∩B)

We know that P(A∩B) = P(B|A) * P(A), as we found this out earlier. Similarly, we can find out that P(~A∩B) = P(~A) * P(B|~A). This means that the probability of the picked cookie being fresh and white chip macadamia nut (45 / 100) is equal to the probability of it being white chip macadamia nut (60 / 100) times the probability of it being fresh given that you know that it is white chip macadamia nut (45 / 60). This is (60 / 100) * (45 / 60), which is 45%, which we know is the answer.
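The decomposition P(B) = P(B|A) * P(A) + P(B|~A) * P(~A) can be checked with the cookie counts (a sketch; A = chocolate chip, B = fresh):

```python
from fractions import Fraction

# Each term is the probability of one way for the cookie to be fresh.
p_fresh_and_choc = Fraction(35, 40) * Fraction(40, 100)       # P(B|A) * P(A)
p_fresh_and_macadamia = Fraction(45, 60) * Fraction(60, 100)  # P(B|~A) * P(~A)

p_fresh = p_fresh_and_choc + p_fresh_and_macadamia
print(p_fresh)  # 4/5, i.e. 80%
```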

Using this information, we can now get to the explicit form of Bayes' Theorem:
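The explicit form, P(A|B) = (P(B|A) * P(A)) / (P(B|A) * P(A) + P(B|~A) * P(~A)), can be checked end-to-end with the cookie numbers (a sketch; A = chocolate chip, B = fresh):

```python
from fractions import Fraction

p_a = Fraction(40, 100)             # P(chocolate chip)
p_b_given_a = Fraction(35, 40)      # P(fresh | chocolate chip)
p_b_given_not_a = Fraction(45, 60)  # P(fresh | macadamia nut)

num = p_b_given_a * p_a
p_a_given_b = num / (num + p_b_given_not_a * (1 - p_a))
print(p_a_given_b)  # 7/16, i.e. 43.75%, the same as the simple form
```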

At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.

This is just a short note to point out that AIs can self-improve without having to self-modify. So locking down an agent from self-modification is not an effective safety measure.

How could AIs do that? The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas).

Or, if the AI remains unchanged and in charge, it could change the whole process around itself so that the whole process changes and improves. For instance, if the AI is inconsistent and has to pay more attention to problems that are brought to its attention than problems that aren't, it can start to act to manage the news (or the news-bearers) to hear more of what it wants. If it can't experiment on humans, it will give advice that will cause more "natural experiments", and so on. It will gradually try to reform its environment to get around its programmed limitations.

Anyway, that was nothing new or deep, just a reminder point I hadn't seen written out.

So I'm working for a friend's company at the moment (the friend is a small business owner who designs websites and is a bit of an entrepreneur). Anyway, I've persuaded him that we should research the empirical literature on what makes websites effective (which we've done a lot of now) and advertise ourselves as being special by reason of doing this (which we're only just starting to do).

One thing that I found absolutely remarkable is how unfilled this space tends to be. Like a lot of things in the broad area of empirical aesthetics, it seems like there are a lot of potentially useful results (cf. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485842/ ), but they're simply not being applied, either as points of real practice or of marketing differentiation.

It seems that if we can ever define the difference between human beliefs and values, we could program a safe Oracle by requiring it to maximise the accuracy of human beliefs on a question, while keeping human values fixed (or very little changing). Plus a whole load of other constraints, as usual, but that might work for a boxed Oracle answering a single question.

This is a reason to suspect it will not be easy to distinguish human beliefs and values ^_^

I believe there is some truth in William James' conclusion that "compared with what we ought to be, we are only half awake." (James, 1907). So what can we do to awaken our slumbering potentials? I am especially interested in our potential for cognitive growth, that is learning to think, learn, and decide better. Early in life we learn amazingly fast, but as we transition into adulthood our cognitive development plateaus, and most of us get stuck in suboptimal mental habits and never realize our full potential. I think this is very sad, and I wish we could find a way to accelerate our cognitive growth. Yesterday, I organized a discussion on this very topic at the Consciousness Hacking Meetup, and it inspired me to propose the following eight steps as a starting point for our personal and scientific exploration of interventions to promote cognitive growth:

1. Tap into your intrinsic motivation by mental contrasting: Who do you want to become and why? Imagine your best possible self and how wonderful it will be to become that person. Imagine how wonderful it will be to have perfected the skill you seek to develop and how it will benefit you. Next, contrast the ideal future self you just imagined with who you are right now and be brutally honest with yourself. Realizing the discrepancy between who you are and who you want to be is a powerful motivator (Oettingen et al., 2009). Finally, make yourself aware that you and the world around you will benefit from any progress that you make on yourself for a very, very long time. A few hours of hard work per week is a small price to pay for the sustained benefits of being a better person and feeling better about yourself for the rest of your life.

2. Become more self-aware: Introspect, observe yourself, and ask your friends to develop an accurate understanding and acceptance of how you currently fare in the skill you want to improve and why. What do you do in situations that require the skill? How well does it work? How do you feel? Have you tried doing it differently? Are you currently improving? Why or why not?

3. Develop a growth mindset (Dweck, 2006): Convince yourself that you will learn virtually any cognitive skill if you invest the necessary hard work. Even talent is just a matter of training. Each failure is a learning opportunity and so are your little successes along the way.

4. Understand the skill and how it is learned: What do masters of this skill do? How does it work? How did they develop the skill? What are the intermediate stages? How can the skill be learned and practiced? Are there any exercises, tutorials, tools, books, or courses for acquiring the skill you want to improve on?

5. Create a growth structure for yourself:

a. Set SMART self-improvement goals (Doran, 1981). The first three steps give you a destination (i.e. a better version of yourself), a starting point (i.e. the awareness of your strengths and weaknesses), and a road map (i.e. how to practice). Now it is time to plan your journey. Which path do you want to take from who you are right now to who you want to be in the future? A good way to delineate your path might be to place a number of milestones and decide by when you want to have reached each of them. Milestones are specific, measurable goals that lie between where you are now and where you want to be. Starting with the first milestone, you can choose a series of steps and decide when to take each step. It helps to set concrete goals at the beginning of every day. To set good milestones and choose appropriate steps, you can ask yourself the following questions: What exactly do I want to learn? How will I know that I have learned it? What will I do to develop that skill? By which time do I want to have learned it?

b. Translate your goals into implementation intentions. An implementation intention is a simple IF-THEN plan. It specifies a concrete situation in which you will take action (IF) and what exactly you will do (THEN). Committing to an implementation intention will make you much more likely to seize opportunities to make progress towards your goals and eventually achieve them (Gollwitzer, 1999).

c. You can restructure your physical environment to make your goals and your progress more salient. To make your goals more salient you can write them down and post them on your desktop, in your office, and in your apartment. To make your progress more salient, make to-do lists and celebrate checking off every subtask that you have completed. Give yourself points for every task you completed and compute your daily score, e.g. the percentage of daily goals that you have accomplished. Celebrate these small moments of victory! Post your path and score board in a visible manner.

d. Restructure your social environment to make it more conducive to growth. You can share your self-improvement goals with a friend or mentor who helps you understand where you are at, encourages you to grow, and will hold you accountable for following through with your plan. Friends can make suggestions for what to try and give you feedback on how you are doing. They can also help you notice, appreciate and celebrate your progress. Identify social interactions that help you grow and seek them out more while changing or avoiding social interactions that hinder your growth.

e. There are at least three kinds of things you can do to restructure your own mind for growth as well. First, you can be more mindful of what you do, how well it works, and why. Mindful learning is much more effective than mindless learning. Second, you can pay more attention to the moments when you do well at what you want to improve. Let yourself appreciate these small (or large) successes more: give yourself a compliment for getting better, smile, and give yourself a mental or physical pat on the shoulder. Attend specifically to your improvement. To do so, ask yourself whether you are getting better rather than how well you did. You can mentally contrast what you did this time with how poorly you used to do when you started working on that skill. Rate your improvement by how many percent better you perform now than you used to. Third, you can be kind to yourself: don’t beat yourself up for failing and being imperfect. Instead, embrace failure as an opportunity for growth. This will allow you to continue practicing a skill that you have not mastered yet rather than giving up in frustration.

6. Seek advice, experiment, and get feedback: Accept that you don’t know how to do it yet and adopt a beginner’s mindset. Curious infants learn much more rapidly than seniors who think they know it all. So emulate a curious infant rather than pretending that you know everything already. With this mindset, it will be much easier to seek advice from other people. Experimenting with new ways of doing things is critical, because if you merely repeat what you have done a thousand times the results won’t be dramatically different. Sometimes we are unaware of something large or small that really matters, and it is often hard to notice what you are doing wrong and what you are doing well. This is why it is crucial to get feedback; ideally from somebody who has already mastered the skill you are trying to learn.

7. Practice, practice, practice. Becoming a world-class expert requires 10,000 hours of deliberate practice (Ericsson, Krampe, & Tesch-Romer, 1993). Since you probably don’t need to become the world’s leading expert in the skill you are seeking to develop, fewer hours will be sufficient. But the point is that you will have to practice a lot. You will have to challenge yourself regularly and practicing will be hard. Schedule to practice the skill regularly. Make practicing a habit. Kindly help yourself resume the practice after you have let it slip.

8. Reflect on your progress on a regular basis, perhaps at the end of every day. Ask yourself: What have I learned today/this week/this month? Am I making any progress? What did I do well? What will I do better tomorrow/this week/month?

When I started to work on the map of AI safety solutions, I wanted to illustrate the excellent article “Responses to Catastrophic AGI Risk: A Survey” by Kaj Sotala and Roman V. Yampolskiy, 2013, which I strongly recommend.

However, during the process I had a number of ideas to expand the classification of the proposed ways to create safe AI. In their article there are three main categories: social constraints, external constraints and internal constraints.

I added three more categories: "AI is used to create a safe AI", "Multi-level solutions" and "meta-level", which describes the general requirements for any AI safety theory.

In addition, I divided the solutions into simple and complex. Simple are the ones whose recipe we know today. For example: “do not create any AI”. Most of these solutions are weak, but they are easy to implement.

Complex solutions require extensive research and the creation of complex mathematical models for their implementation, and could potentially be much stronger. But the odds are less that there will be time to realize them and implement successfully.

After the aforementioned article, several new ideas about AI safety appeared.

These new ideas in the map are based primarily on the works of Ben Goertzel, Stuart Armstrong and Paul Christiano. But probably many more exist and were published but didn’t come to my attention.

Moreover, I have some ideas of my own about how to create a safe AI and I have added them into the map too. Among them I would like to point out the following ideas:

1. Restriction of self-improvement of the AI. Just as a nuclear reactor is controlled by regulating the intensity of the chain reaction, one may try to control an AI by limiting its ability to self-improve in various ways.

2. Capture the beginning of dangerous self-improvement. At the start, a potentially dangerous AI has a moment of critical vulnerability, just as a ballistic missile is most vulnerable at launch. Imagine that an AI gained an unauthorized malignant goal system and started to strengthen itself. At the beginning of this process it is still weak, and if it is below the level of human intelligence at this point, it may still be more stupid than the average human even after several cycles of self-improvement. Let's say it has an IQ of 50 and after self-improvement it rises to 90. At this level it is already committing violations that can be observed from the outside (especially unauthorized self-improvement), but it does not yet have the ability to hide them. At this point in time, you can turn it off. Alas, this idea would not work in all cases, as some objectives may become hazardous gradually as the scale grows (1,000 paperclips are safe, one billion are dangerous, 10^20 are an x-risk). This idea was put forward by Ben Goertzel.

3. AI constitution. First, in order to describe the Friendly AI and human values we can use the existing body of criminal and other laws. (And if we create an AI that does not comply with criminal law, we are committing a crime ourselves.) Second, to describe the rules governing the conduct of AI, we can create a complex set of rules (laws that are much more complex than Asimov’s three laws), which will include everything we want from AI. This set of rules can be checked in advance by specialized AI, which calculates only the way in which the application of these rules can go wrong (something like mathematical proofs on the basis of these rules).

4. "Philosophical landmines." In the map of AI failure levels I have listed a number of ways in which high-level AI may halt when faced with intractable mathematical tasks or complex philosophical problems. One may try to fight high-level AI using "landmines", that is, putting it in a situation where it will have to solve some problem, but within this problem is encoded more complex problems, the solving of which will cause it to halt or crash. These problems may include Godelian mathematical problems, nihilistic rejection of any goal system or the inability of AI to prove that it actually exists.

5. Multi-layer protection. The idea here is not that if we apply several methods at the same time the likelihood of their success will add up; that notion does not work if all the methods are weak. The idea is that the methods of protection work together to protect the object from all sides. In a sense, human society works the same way: a child is educated by example as well as by rules of conduct, then he begins to understand the importance of compliance with these rules, but at the same time the law, police and neighbours are watching him, so he knows that criminal acts will put him in jail. As a result, lawful behaviour is his goal, which he finds rational to obey. This idea can be reflected in a specific architecture of AI, which will have at its core a set of immutable rules; around it will be built a human emulation which will make high-level decisions, and complex tasks will be delegated to narrow Tool AIs. In addition, an independent emulation (a conscience) will check the ethics of its decisions. Decisions will first be tested in a multi-level virtual reality, and the ability of the whole system to self-improve will be significantly limited. That is, it will have an IQ of 300, but not a million. This will make it effective in solving aging and global risks, but it will also be predictable and understandable to us. The scope of its jurisdiction should be limited to a few important factors: prevention of global risks, death prevention and the prevention of war and violence. But we should not trust it in such an ethically delicate topic as the prevention of suffering, which will be addressed with the help of conventional methods.

This map could be useful for the following applications:

1. As illustrative material in discussions. People often propose ad hoc solutions as soon as they learn about the problem of friendly AI, or fixate on one favourite solution.

2. As a quick way to check whether a new solution really has been found.

3. As a tool to discover new solutions. Any systematisation creates "free cells" to fill for which one can come up with new solutions. One can also combine existing solutions or be inspired by them.

4. There are several new ideas in the map.

A companion to this map is the map of AI failure levels. In addition, this map is subordinate to the map of global risk prevention methods and corresponds to the "Creating Friendly AI" block of Plan A2 within it.

Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.

This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, which might be useful areas for rationalist individuals and organisations to explore. I feel this may be a better use of rationality training organisations' time than gatekeeping information.

To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.

Hello, all. My sibling asked me for advice recently, and I'm making this post on his behalf.

Said sibling currently has one more year to go at MIT before he gets his bachelor's degree in Mathematics/CS. He is also enrolled in a 5-year master's program, so he will need one more year after that to finish a master's, after which he anticipates getting a job somewhere in the CS industry, finance, or academia. Anyway, he is interested in taking a gap year after finishing his bachelor's to pick up some novel experiences and to try something different from what he has been doing already and plans to do after graduation.

Right now, he is in the brainstorming stage and is looking for ideas. Note that he is not opposed to getting a job or something of the like, as long as it's a different experience from what he would get working for a large software company, a hedge fund, or something similar. Financially, he does need to earn enough to live on (this isn't quite a vacation), but he isn't worried about money aside from that (so the "money" constraint only needs to be satisficed, not optimized). With that said, what are some things that he might consider doing?

I've been floating this idea around for a while, and there was enough interest to organize it.

Diplomacy is a board game of making and breaking alliances. It is a semi-iterative prisoner's dilemma with 7 prisoners. The rules are very simple, there is no luck factor and any tactical tricks can be learned quickly. You play as one of the great powers in pre-WW1 Europe, and your goal is to dominate over half of the board. To do this, you must negotiate alliances with the other players, and then stab them at the most opportune moment. But beware, if you are too stabby, no one will trust you. And if you are too trusting, you will get stabbed yourself.

If you have never played the game, don't worry. It is really quick to pick up. I explain the rules in detail here.

The game will (most likely) be played at webdiplomacy.net. You need an account, which requires a valid email. To play the game, you will need to spend at least 10 minutes every phase (3 days) to enter your orders. In the meantime, you will be negotiating with other players. That takes as much time as you want it to, but I recommend setting aside at least 30 minutes per day (in 5-minute quanta). A game usually lasts about 10 in-game years, which comes down to 30-something phases (60-90 days). A phase can progress early if everyone agrees. Likewise, the game can be paused indefinitely if everyone agrees (e.g. if a player will not have Internet access).

Joining a game is Serious Business, as missing a deadline can spoil it for the other 6 players. Please apply iff:

1. You will be able to access the game for 10 minutes every 3 days (90% certainty required)

2. If 1) changes, you will be able to let the others know at least 1 day in advance (95% certainty required)

3. You will be able to spend an average of 30 minutes per day (standard normal distribution)

4. You will not hold an out-of-game grudge against a player who stabbed you (adjusting for stabbyness in potential future games is okay)

If you still wish to play, please sign up in the comments. Please specify the earliest time it would suit you for the game to start. If we somehow get more than 7 players, we'll discuss our options (play a variant with more players, multiple games, etc).

A few months ago we launched an experimental website. In brief, our goal is to create a platform where unrestricted freedom of speech is combined with high quality of discussion. The problem can be approached from two directions. One is to help users navigate through content and quickly locate the higher quality posts. Another, which is the topic of this article, is to help users improve the quality of their own posts by providing them with meaningful feedback.

One important consideration for those who want to write better comments is how much detail to leave out. Our statistical analysis shows that for many users there is a strong connection between the ratings and the size of their comments. For example, for Yvain (Scott Alexander) and Eliezer_Yudkowsky, the average number of upvotes grows almost linearly with increasing comment length.

This trend, however, does not apply to all posters. For example, for the group of top ten contributors (in the last 30 days) to LessWrong, the average number of upvotes increases only slightly with the length of the comment (see the graph below). For quite a few people the change even goes in the opposite direction – longer comments lead to lower ratings.

Naturally, even if your longer comments are rated higher than the short ones, this does not mean that inflating comments would always produce positive results. For most users (including popular writers, such as Yvain and Eliezer), the average number of downvotes increases with increasing comment length. The data also shows that long comments that get most upvotes are generally distinct from long comments that get most downvotes. In other words, long comments are fine as long as they are interesting, but they are penalized more when they are not.

The rating patterns vary significantly from person to person. For some posters, the average number of upvotes remains flat until the comment length reaches some threshold and then starts declining with increasing comment length. For others, the optimal comment length may be somewhere in the middle. (Users who have accounts on both LessWrong and Omnilibrium can check the optimal length for their own comments on both websites by using this link.)

Obviously length is just one among many factors that affect comment quality and for most users it does not explain more than 20% of variation in their ratings. We have a few other ideas on how to provide people with meaningful feedback on both the style and the content of their posts. But before implementing them, we would like to get your opinions first. Would such feedback be actually useful to you?
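The "does not explain more than 20% of variation" figure above is a statement about R². As a purely illustrative sketch of how such a number can be computed (the data here is synthetic; the real Omnilibrium dataset and pipeline are not public in this post), one might fit a simple linear model of upvotes against comment length and measure the explained variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: comment lengths (in words) and upvote counts.
# Real per-user data would come from the site's database.
lengths = rng.integers(10, 500, size=200)
upvotes = 0.01 * lengths + rng.normal(0, 1.5, size=200)  # weak positive trend plus noise

# Fit a simple linear model: upvotes ~ slope * length + intercept
slope, intercept = np.polyfit(lengths, upvotes, 1)

# R^2: the fraction of rating variation explained by length alone.
predicted = slope * lengths + intercept
ss_res = np.sum((upvotes - predicted) ** 2)
ss_tot = np.sum((upvotes - upvotes.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

An R² well below 1 on real data would match the observation that length, while correlated with ratings for many users, leaves most of the variation to other factors such as content and style.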

I've often wondered why scientific thinking seems to be so rare. What I mean by this is dividing problems into theory and empiricism: specifying your theory exactly and then looking for evidence to confirm or deny it, or gathering evidence first and forming an exact theory later.

This is a bit narrower than the broader scope of rational thinking. A lot of rationality isn't scientific. Scientific methods don't just allow you to get a solution, but also to understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific. They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes. They simply did a set of steps and got a result.

Similarly, a lot of modern medicine is rational, but not too scientific. A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is. They may run a test to verify their guess. Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that the science curricula in our schools don't teach much scientific thinking.

What we get instead is mostly useless facts. We learn what a cell membrane is, or how to balance a chemical equation. Learning about, say, the difference between independent and dependent variables is often left to circumstance. You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own. Some curricula include a required research methods course, but the availability and quality of this course varies greatly between both disciplines and colleges. Why there isn't a single standardized method of teaching this stuff is beyond me. Even math curricula are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles in front of the typical non-major that most won't surmount.

It should not be surprising then that so many fail at even basic analysis. I have seen many people make basic errors that they are more than capable of understanding but simply were never taught. People aren't precise with their definitions. They don't outline their relevant variables. They construct far too complex theoretical models without data. They come to conclusions based on small sample sizes. They overweight personal experiences, even those experienced by others, and underweight statistical data. They focus too much on outliers and not enough on averages. Even professors, who do excellent research otherwise, often suddenly stop thinking analytically as soon as they step outside their domain of expertise. And some professors never learn the proper method.
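The small-sample error mentioned above is easy to demonstrate with a simulation (illustrative only; the 70% threshold is an arbitrary choice for the example): a fair coin quite often looks "biased" in a handful of flips, and almost never does in a large sample.

```python
import random

random.seed(42)

def looks_biased(n_flips, threshold=0.7, trials=10000):
    """Fraction of repeated experiments in which a fair coin
    shows at least `threshold` heads, i.e. 'looks biased'."""
    count = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads / n_flips >= threshold:
            count += 1
    return count / trials

small = looks_biased(10)   # experiments of 10 flips each
large = looks_biased(100)  # experiments of 100 flips each
```

With 10 flips, roughly one experiment in six produces 70%+ heads from a perfectly fair coin; with 100 flips, that almost never happens. This is the mechanism behind confident conclusions drawn from small samples.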

Much of this site focuses on logical consistency and eliminating biases. It often takes this to an extreme: what Yvain refers to as X-Rationality. But eliminating biases barely scratches the surface of what is often necessary to truly understand a problem. This may be why it is said that learning about rationality often reduces rationality. An incomplete, slightly improved, but still quite terrible solution may generate a false sense of certainty. Unbiased analysis won't fix a lousy dataset. And it seems rather backwards to focus on what not to do (biases) rather than what to do (analytic techniques).

True understanding is often extremely hard. Good scientific analysis is hard. It's disappointing that most people don't seem to understand even the basics of science.

Because the number of participants quickly exceeded my expectations, I had to scramble to put something together for a larger group. For this I had tactical aid from blob and practical support from colleagues, who helped put everything together, from name tags to food, drinks and chairs.

We had an easy start with getting to know each other with Fela's Ice-Breaking Game.

The main topics covered were:

An introduction to the topics, goals and methods of LessWrong, illustrated with The Parable of the Dagger by pinkgothic. This was followed by many smaller talks about how to apply rationality.

I answered many questions regarding biases and fallacies, and we used the game cards I had prepared earlier multiple times. These attracted some interest and can be found here (Dropbox).

Besides the main topics, there was a good atmosphere, with many people having smaller discussions.

The event ended with a short wrap-up based on Irina's Sustainable Change talk from the Berlin event, which prompted some people to take action based on what they heard.

What I learned from the event:

I still tend to overplan. Having a plan for eventualities isn't bad, but the agenda doesn't need to be as highly structured as I made it; that could create expectations that can't be met.

Apparently I appeared stressed, probably from hurrying around, even though I didn't feel that way myself. I wonder whether that has a negative effect on other people and how I can avoid it.

A standard-issue meeting room for 12 people can comfortably host 24 people if tables and furniture are rearranged and comfy beanbags etc. are added.

The number of people showing up can vary unpredictably. It may depend on the weather, how the event is communicated, and unknown factors.

Visualize the concrete effects of your charity. This can give you a specific intuition you can use to decide whether it's worth it. Imma's example was thinking about how the AMF bednets you donated hang over children and protect them from mosquitoes.

There will definitely be a follow-up meeting of a comparable size in a few months (no date yet). And maybe smaller get-togethers will be organized in between.

A fully general counterargument [FGCA] is an argument which can be used to discount any conclusion the arguer does not like.

With the caveat that the arguer doesn't need to be aware that this is the case. But if (s)he is not aware of it, this seems like the other biases we are prone to. The question is: is there a tendency or risk of accidentally forming FGCAs? Do we fall easily into this mind-trap?

This post tries to (non-exhaustively) list some FGCAs as well as possible countermeasures.