I really enjoyed reading this post, and I'm pleased to see an effective altruist making a case for family planning as an effective cause. You sound particularly well informed about development issues - have you laid out your personal or professional background somewhere? I've asked to join your Effective FP Facebook group.

What do you think is the most promising graduate degree (Master's) for (pure) Math undergrads pursuing earning to give? And maybe more generally: how would you structure an answer to the question of which graduate degree one should pursue, given that he/she is committed to EtG? To give you some ideas, I'm currently looking at: computer science (machine learning), econometrics, economics, applied mathematics, financial mathematics, statistics, and something related to data science.

Thanks for the reply! I don't know just yet what kind of advocacy I'd be doing - I would hope to figure that out as I went along. Maybe that's a point against PPES, but maybe even being an effective-altruism-minded public figure of any sort would do some good? The PPES degree is only a few years old so there's no real data on where people end up, but similar degrees at York and Oxford list finance among a broad range of commonly chosen careers (http://www.ppe.ox.ac.uk/index.php/a-future-with-ppe) (https://www.york.ac.uk/pep/graduate-profiles/). I suppose this would make PPES the broader option, allowing me to change direction later on. Do you feel that this characteristic is more valuable than speeding up my entry into a potentially high-impact position?

Regarding the universities: all the students I've talked to at both seem to love their respective universities, but Trinity is ranked higher and is much better known internationally.

It will be easier to get a job in almost any sector with a degree from Trinity than with one from Galway (particularly outside Ireland), and you will probably meet more interesting/driven people there. You can also make your PPES degree more quantitative if you want through particular module choices (e.g. the econometrics option in third-year economics or quantitative methods in fourth-year economics), although it is certainly too early to be making specific choices about modules at this stage!

As others have said, it will also keep your options broader, which is valuable for all of us but particularly those of us who are still trying to work out what we are particularly good at.

Ryan is a more experienced programmer/coder than I am. As a time-poor beginner, I found the MITx course on R much, much easier to use (and far more interesting) than the Johns Hopkins courses on Coursera. They also have two decent courses on Python, the second of which is more relevant to statistical applications.

As someone who really admired George Monbiot as a teenager, I'm slightly surprised to hear him described as an effective altruist in spirit.

I admire his transparency and his willingness to change his mind, but he does strike me as someone quite committed to an ideology (generally a progressive one not too far from my own!) around issues like state intervention and ownership/delivery of public services. I'm also not convinced that rewilding is a promising or cost effective way of tackling the environmental issues which he (quite possibly rightly) prioritises so much. I'm not saying I don't think he is a good person, but I am saying it seems a stretch to think of him as an effective altruist in spirit. Do you know him personally?

I actually think the argument in his piece is pretty good as someone who works in one of the industries he is upset about. I can think of several friends, none of whom would consider themselves effective altruists, who have indeed followed the sort of path he outlines. All too often, people do not make differences from the inside.

I take your point that there are counterexamples where people do good from inside (I would hope to consider myself here, as someone donating 15%+ and triggering donations from colleagues worth around twice that last year) but as a general phenomenon his piece is pretty sound. A rebuttal would be difficult, but a response could go along the lines of "Not all City workers" or similar. Do you think this would still be valuable?

Firstly, we should use commercial software to operate the survey rather than trying to build something ourselves. This would be both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.

It does seem clearly to be worth this expense. I'm concerned that .impact/the community team behind the survey are too reluctant to spend money and undervalue their time relative to it. I suppose that's the cost of not being a funded organisation.

asking people to answer a question with a given answer, removing any random clickers or poor quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'")

Seconded - I'd urge the team to do this, even if it means ignoring some genuine answers (I would expect Effective Altruists to generally put enough effort into the survey to spot and complete this question, though I might be naïve).

Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey.

An excellent suggestion also. I'd be willing to do this - I imagine anyone else who'd volunteer can comment below and hopefully someone from the team will spot this and send messages.

My main additional comment to the below is that we should be relatively unconcerned with people failing to finish a long survey - we are talking to individuals who are committed to doing a significant amount of good in the world. The relative cost of a few extra questions is low compared with the cost of missing out a question which allows us to better understand the movement and therefore change the world.

I'm going to reproduce a comment I wrote at the time the 2014 results were released in order to have them on the agenda for the call later on. I remain convinced that each of these three practical suggestions is relatively low effort and will make the survey process easier, the data more reliable and any resulting conclusions more credible:

Firstly, we should use commercial software to operate the survey rather than trying to build something ourselves. This would be both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.

Secondly, we should use live data validation to improve data collection, data integrity and ease of analysis. SurveyMonkey or other tools can help John to fill in his age in the right box. It could refuse to believe the 7-year-old, and suggest that they have another go at entering their age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing any random clickers or poor quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'").
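To make the attention-check idea concrete, here is a minimal sketch of how failed responses could be filtered out after collection. This is purely illustrative: the column name "attention_check" and the response records are hypothetical, not drawn from the actual survey, and a real implementation would also accept any agreed variants of the correct answer.

```python
# Hypothetical sketch: drop respondents who fail an attention-check question
# ("Please enter the number '2' in letters"). Field names are illustrative.

def passes_attention_check(answer):
    """Accept the expected answer 'two', ignoring case and stray whitespace."""
    return answer.strip().lower() == "two"

def filter_responses(responses):
    """Keep only responses whose attention-check answer is valid."""
    return [r for r in responses if passes_attention_check(r.get("attention_check", ""))]

responses = [
    {"id": 1, "age": "34", "attention_check": "two"},
    {"id": 2, "age": "7", "attention_check": "asdf"},   # likely a random clicker
    {"id": 3, "age": "28", "attention_check": " Two "}, # minor formatting, still valid
]

valid = filter_responses(responses)
print([r["id"] for r in valid])  # → [1, 3]
```

The same pattern extends naturally to range checks (e.g. rejecting implausible ages at entry time), which commercial tools like SurveyMonkey can enforce live rather than in post-processing.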

Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey. It is very, very hard to estimate how people are going to read a particular question, or which options should be included in multiple choice questions. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won't necessarily be reading it in the way the author expected them to. I would suggest trying out any version of the survey with a large group (at least twenty) of different people who might answer it, to catch the interpretations of questions which different groups might have. Does the EA affiliation filter work as hoped? Are there important charities which we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing generally.