Singularity Institute desperately needs someone who is not me who can write cognitive-science-based material. Someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation - or if you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) your donation was to help fund Luke.

Note that the Summer Challenge that doubles all donations will run until August 31st. (We're currently at $31,000 of $125,000.)

During his stint as a Singularity Institute Visiting Fellow, Luke has already:

Co-organized and taught sessions for a well-received one-week Rationality Minicamp, and taught sessions for the nine-week Rationality Boot Camp.

Raised awareness of AI risk and the uses of rationality by giving talks at universities and technology companies, as he recently did at Halcyon Molecular.

If you’d like to help us fund Luke Muehlhauser to do all that and probably more, please donate now and include the word “Luke” in the comment field. And if you donate before August 31st, your donation will be doubled as part of the 2011 Summer Singularity Challenge.

Your direct pleas for money often work the best against my akrasia, Eliezer. Maybe some new lack of fallacies in my thinking has convinced me to give you money for ideas that some consider foolish, but here is a Benjamin for the cause.

So, no plans for providing any substantiation of the mini-camp's purported success? (Some want to know.) Or of people who have increased their level of life success as a result of the winning at life guides?

At the very least I would like to see some kind of plan for evaluating future ventures... it may be too late now for anything but post-hoc qualitative evaluation of the mini-camp (or mega-camp?). Except that we did take a survey just before the camp, whose general idea I remember but not the specific questions, and I think there are plans to send it out again 10 months from now. Fortunately that survey is available online IIRC, but I won't go look at it.

I agree with 'more testing and evidence, please', but you often come across as adversarial and I think that generally makes it harder for you to convince the people you want to convince.

Well, I hope they're not relying on "Silas is a meanie" as their intellectual "covering fire" for not substantiating this claim. And it's not that I want more testing and evidence, I just want to see what they think proves its success.

As an aside, remember that the minicamp was a relatively unplanned event; it came about because SIAI had extra time and space.

True, but I wouldn't be asking for any of this if leaders didn't try to paint it afterwards as a major success. If they want to take a risky venture, fine. If they want to play, "I meant to do that", let's see what it accomplished.

I'm not talking about 'covering fire'. If your goal is to win an argument or appear righteous, then your strategy is alright. If your goal is to actually get SIAI to change their behavior, then your language is hurting your cause. You want to make it as easy as possible for them to change their behavior, and it's psychologically much easier to do something because an ally asks than because an adversary asks.

You have seen evidence: both Guy (link) and I (link) posted 'lessons learned' for the minicamp. You are right to say this is not especially strong evidence, but it is evidence. I think it would have been good to videotape some of the sessions and post them, along with the exit surveys (they took testimonials too).

Sure, and I regularly do ("Well, if situation X seems like it would produce anecdote Y, then all anecdote Y shows us is that situation X happened, not that contention Z is necessarily true - only if situation X shows us that contention Z is true").

I would surmise that not all commenters are willing to be that forgiving.

And how else should I update after reading two self-selected, subjective assessments? This has a perfectly reasonable Bayesian interpretation.
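The "reasonable Bayesian interpretation" can be made concrete with a toy update, where every probability below is an invented illustrative assumption: because self-selected reviewers tend to write positive assessments whether or not a program worked, the likelihood ratio is close to 1 and the posterior barely moves. A minimal sketch:

```python
# Toy Bayesian update on two self-selected positive testimonials.
# All numbers are illustrative assumptions, not measured values.

prior_success = 0.3          # assumed prior that the minicamp "worked"

# Self-selection makes glowing reports likely either way,
# so the likelihood ratio is close to 1:
p_reports_if_success = 0.9   # P(positive testimonials | success)
p_reports_if_failure = 0.7   # P(positive testimonials | no success)

posterior = (prior_success * p_reports_if_success) / (
    prior_success * p_reports_if_success
    + (1 - prior_success) * p_reports_if_failure
)
print(round(posterior, 3))
```

Under these made-up numbers the testimonials are evidence, just weak evidence: the posterior rises from 0.30 to only about 0.36.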

EDIT: Also note that the grandparent was posted before AnnaSalamon actually fixed the problem at hand.

EDIT x2: And while I'm endlessly editing this comment, let me note that most of this drama could have been averted if someone had just posted the damn data instead of coming up with multiple, bad excuses. Lots of guilty parties, only a couple heroes (in my book, at least).

And how else should I update after reading two self-selected, subjective assessments?

Very little. I was explaining why your comment was downvoted so much. I said "technically inaccurate" as opposed to "wrong" because I am sympathetic to your point of view; it is almost no data. But it is a little bit of data.

remember that the minicamp was a relatively unplanned event; it came about because SIAI had extra time and space. I will be more concerned if the megacamp has a similar lack of testing.

As far as I can tell, the mega-camp actually had even less testing than the mini-camp. I did leave before the last week though, so I can't be sure quite what was done then. We had a discussion at the beginning about how we would decide if the camp had been a success, but I don't think we came to any very satisfactory conclusions.

We collected lots of data before and during minicamp. We are waiting for some time to pass before collecting followup data, because it takes time for people's lives to change, if they're going to change. Minicamp was only a couple months ago.

Minicampers are generally still in contact, and indeed we are still gathering data. For example, several minicampers sent me before and after photos concerning their fashion (which was part of the social effectiveness section of the minicamp) and I'm going to show them to people on the street and ask them to choose which look they prefer (without indicating which is 'before' and which is 'after').

So yes: by all qualitative measures, minicamp seems to have been a success. The early quantitative measures have been taken, but before-and-after results will have to wait a while.

As for future rationality training, we are taking the data gathered from minicamp and boot camp and also from some market research we did and trying to build a solid curriculum. To my knowledge, four people are seriously working on this project, and Eliezer is one of them.

I'm curious, why you guys didn't post the testimonials or surveys you gathered at the end of the camp? Obviously these should be accompanied with appropriate caveats, but I think this would help explain to people why you are pleased with the results and think 'we're on to something'.

Sure, I'd love to! (I thought I didn't qualify to volunteer for SIAI?) Hand over whatever you have and I'll make sure they do it right! (I thought this administration has to be done by someone in the loop on this, but whatever.)

Oh, you were just hoping I'd drop it, and the issue of actual substantiation of the mini-camp's success (which lies at the top of your reasons for wanting to fund Luke) would die off? Can't help there.

I need to write up the results myself because I personally ran the minicamp with Anna Salamon and Andrew Critch. But I have tons of stuff I could have you do that would free up more of my time to get around to writing up results of minicamp data. If you're interested in helping, that would be awesome. You can contact me at lukeprog [at] gmail.

I actually have no interest in supporting your research. Every time I ask a clarifying question of any of your claims, you get extremely defensive and fail to answer it, which suggests a poor understanding of what you're trying to present results on. Also, every piece of advice I've followed falls woefully short of what you claim it does, and I don't seem to be alone here (on either point). I think your contributions are overrated.

(This is a large part of why I put such a low prior on claims of the minicamp's phenomenal success and was so skeptical of the report.)

I don't claim to have contributed more research, to LW, of course, but when I do present research I make sure to understand it.

In fairness, your more recent work doesn't seem to be subject to any of this, so I could very well change my opinion on this.

Every time I ask a clarifying question of any of your claims, you get extremely defensive and fail to answer it, which suggests a poor understanding of what you're trying to present results on.

Regarding the comment thread you linked to, I agree that the initial response you received was defensive and uninformative. I am not surprised to see it sitting at zero upvotes.

When you prodded further, you got a good response, so while I think you didn't come out badly in that exchange at all, I am surprised that you are citing it as evidence of lukeprog understanding poorly. It instead suggests that he responds defensively even when he does understand and has a cogent answer, and that defensiveness from him implies shallow understanding far less than it would from others.

I think your contributions are overrated.

I have little problem with bluntly telling people that they suck, and by extension don't mind less offending forthright communication, but I am leery of discussing people's work by evaluating how people's work compares to the popular perception of their work. It introduces an unnecessary factual dispute - how people are perceived.

E.g. "Loui Eriksson is the most underrated player in the NHL, just ask anyone! Wait a second...if everyone agrees, then..."

The comment you linked to doesn't seem like a clarifying question at all. I think that conversation might be another instance where your belligerent tone hurt communication even though your point was a valid one. Luke's answer didn't seem particularly defensive, either (although I have seen other conversations where his tone was defensive, so I won't challenge that point.)

I actually agree with you on this point (and upvoted all your questions about it), but the longer you argue the less sympathetic I'm getting. You asked when results would be published, and the answer was that it requires a lot of processing time. You asked if volunteers can help, and Luke answered that while volunteers can't help on that specifically, they can contribute to other things which will speed up the process. To which you answered:

I actually have no interest in supporting your research....I think your contributions are overrated.

Which just doesn't show a whole lot of interest in actually resolving the problem you're concerned about.

In the interest of optimizing our rationality I think that we need to continue to call out instances of "community distancing" such as the one exhibited by Silas above.

The reason for doing so? It lets the dissenters know that a community can tolerate and appreciate criticism but not the creation of a lone wolf character. Lone wolves do not contribute to a community and instead impede our advances in rationality by drawing conversations back to their status. As such, their status seeking should be pointed out and skepticism should be attached to their future postings.

Passive-aggressive comments in particular are troublesome because these types eventually find ways to disrupt substantive threads by reminding others of their loner status and their unacknowledged genius. Their resentment then leads to them mocking key figures in a community (note Silas' comments to both Eliezer and Luke).

Perhaps LW needs a mini-sequence on acceptable and non-acceptable signaling within a rational community.

It lets the dissenters know that a community can tolerate and appreciate criticism but not the creation of a lone wolf character. Lone wolves do not contribute to a community and instead impede our advances in rationality by drawing conversations back to their status.

Let's taboo "lone wolf" and see what you actually mean by it, because I don't see Silas as a lone wolf figure in this debacle. For example, most of his comments have positive karma -- what I would consider a lone wolf wouldn't have such support.

As such, their status seeking should be pointed out and skepticism should be attached to their future postings.

I object. "Lone wolves", and Silas in particular, are not more status seeking than average. Luke's contributions are far more status seeking than Silas's are. Luke is good at status seeking while Silas's biggest weakness is that he fails to status seek when it would clearly be in his interests to do so.

I'm genuinely interested in seeing this data published, because I think it's something that a lot of people can build off of. If the only obstacle is really hours, I am happy to contribute.

I would be happy to show up in person while I'm in the area, pick up any paper notes you have available, transcribe them, and mail the originals back once finished. I have professional experience with data entry (including specifically product surveys) and market research in general. I'll be in San Francisco the afternoon of Monday, September 5th, hopefully around noon. I leave early Tuesday morning.

Great! Much of the minicamp data is private and anonymous, so I can't share that with you, but I definitely have tasks for volunteers to do that will free up time for me to write up a minicamp report - some of those tasks are even directly relevant to minicamp. Please email me at lukeprog at gmail if you'd like to help.

Here are excerpts from the Minicamp testimonials (which were written to be shown to the public), with a link to the full list at the end:

“The week I spent in minicamp had by far the highest density of fun and learning I have ever experienced. It's like taking two years of college and condensing it to a week: you learn just as much and you have just as much fun. The skills I've learned will help me set and achieve my own life goal, and the friends I've made will help me get there.” --Alexei

“This was an intensely positive experience. This was easily the most powerful self-modification I've ever made, in all of the social, intellectual, and emotional spheres. I'm now a more powerful person than I was a week ago -- and I can explain exactly how and why this is true.

At mini-camp, I've learned techniques for effective self-modification -- that is, I have a much deeper understanding of how to change my desires, gather my willpower, channel my time and cognitive resources, and model and handle previously confusing situations. What's more, I have a fairly clear map of how to build these skills henceforth, and how to inculcate them in others. And all this was presented in such a way that any sufficiently analytical folk -- anyone who has understood a few of the LW sequences, say -- can gain in extreme measures.”
--Matt Elder / Fiddlemath

“I expected a week of interesting things and some useful tools to take away. What I got was 8 days of constant, deep learning, challenges to my limits that helped me grow. I finally grokked that I can and should optimize myself on every dimension I care about, that practice and reinforcement can make me a better thinker, and that I can change very quickly when I'm not constrained by artificial barriers or stress.

I would not recommend doing something like this right before another super-busy week, because I was learning at 100% of capacity and will need a lot of time to unpack all the things I learned and apply them to my life, but I came away with a clear plan for becoming better. It is now a normal and easy thing for me to try things out, test my beliefs, and self-improve. And I'm likely to be much more effective at making the world a better place as well, by prioritizing without fear.

The material was all soundly-researched and effectively taught, with extremely helpful supplemental exercises and activities. The instructors were very helpful in and out of session. The other participants were excited, engaged, challenging, and supportive.

I look forward to sharing what I've learned with my local Lesswrong meetup and others in the area. If that's even 1/4 as awesome as my time at the Mini-Camp, it will make our lives much better.”
--Ben Hoffman / Benquo

“I really can't recommend this camp enough! This workshop broke down a complex and intertwined set of skills labelled in my brain as "common sense" and distinguished each part so that I could work on them separately. Sessions on motivation, cognition, and what habits to build to not fool yourself were particularly helpful. This camp was also the first example that I've seen of people taking current cognitive science and other research, decoding it, and showing people what's been documented to work so that they can use it too. It feels to me now as though the coolest parts of the sequences have been given specific exercises and habits to build off of. This camp, and the people in it, have changed my path for the better.”
--David Jones / TheDave

So it's early enough to call it an unqualified success, but too soon for evidence to exist that it was a success? If I have to be patient for the evidence to come back, shouldn't you be a little more patient about judging it a success?

Edit: I gave a list of information you could post. The fashion part isn't surprising enough to count as strong evidence, and was a relatively small part of the course that, in any case, you previously claimed could be accomplished by looking at a few fashion magazines.

I mean 'evidence' in the Bayesian sense, not the scientific sense. I have significant Bayesian evidence that minicamp was a success on several measures, but I can't know more until we collect more data.

Thanks for providing a list of information we could post. One reason for not posting more information is that doing so requires lots of staff hours, and we don't have enough of those available. We're also trying to, for example, develop a rationality curriculum and write a document of open problems in FAI theory.

If you're anxious to learn more about the rationality camps before SI has time to publish about that data, you're welcome to contact the people who attended; many of them have identified themselves on Less Wrong.

I'm fairly confident that campers got more out of my fashion sessions than what they can learn only from looking at a few fashion magazines.

Great, so did I! Now communicate that evidence. If it can't be communicated, I don't think you should be so confident in it.

One reason for not posting more information is that doing so requires lots of staff hours

I find that hard to believe. It may take time for the participants to report back, but not for you to tabulate the results.

We're also trying to, for example, develop a rationality curriculum and write a document of open problems in FAI theory.

I'm sorry, but this just sounds like excuse-making. Do you want your audience to be people who just take your word on something like this? I've asked several times for some very simple checks. This claim that you're too busy just doesn't fly.

I'm fairly confident that campers got more out of my fashion sessions than what they can learn only from looking at a few fashion magazines.

Then why don't you mention this in your "how to be happy" post, which is also being used as evidence of your productivity? Do you know a single person who has improved fashion to an acceptable level as a result of those magazines?

(I disapprove of downvoting the parent (which I just found at '-2'). It continues the same conversation as Silas's previous posts, pointing out what does look like rationalization. If raising the possibility of an interlocutor's rationalizing in defense of their position is considered too rude to tolerate, we'll never fix such problems.)

I suspect that most of the downvotes came from the very last sentence, which struck me as more than a little snarky. "Cheers" might not be necessary, but it is a gesture of politeness and was probably added in an attempt to convey a positive tone (which is important but somewhat tricky in text). I wouldn't say "not necessary" if someone held the door for me, even if it is obviously true.

Agree with you that the actual substance of the post was in no way downvote-worthy.

I get annoyed by people who "sign" posts in the text like that, especially when they do it specifically on replies to me. It really isn't necessary. I'm interested in substance, not pleasantries, as I was three months ago when I asked how the mini-camp was a success.

Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community, and thus can easily be interpreted as a passive-aggressive assertion of high status. (Especially coming from Luke, who I find emits such signals rather often -- he may want to be aware of this in case it's not his intention.)

I interpret Silas's "Not necessary" as roughly "Excuse me, but you're not on Mount Olympus writing an epistle to the unwashed masses on LW down below".

Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community

No. I am very confident the intention was to signal that Luke was not being emotionally affected by the intense criticism, for the purpose of appearing to be leader-type material, which is quite different from aloofness from the community.

It's not a convincing signal, primarily because its idiosyncrasy highlights it for analysis, but I still think the above holds.

I saw lukeprog's signing messages as minor noise and possibly a finger macro developed long ago, so I stopped seeing the signature.

I'd be dubious about assuming one can be certain (where's the Bayesianism?) about what someone else is intending to signal, especially considering that it's doctrine here that one can't be certain of even one's own motivations. How much less certain should one be about other people's?

I would add some further uncertainty if one feels very sure about the motivation driving a behavior that's annoying.

Great, so did I! Now communicate that evidence. If it can't be communicated, I don't think you should be so confident in it.

Here:

Surprisingly positive reviews in both qualitative and quantitative forms on our exit surveys.

Follow-ups with many individual minicampers who report that several of the things we taught have stuck with them and improved their lives.

People telling me to my face during minicamp that they were getting lots of value out of it.

Enthusiastic testimonials.

It may take time for the participants to report back, but not for you to tabulate the results.

No, it definitely takes time to tabulate the results and write a presentable post about the results. I've personally spent 3 hours on it already but the project is unfinished.

Do you want your audience to be people who just take your word on something like this?

Ah. I may not have communicated this clearly: I think your skepticism concerning the success of the minicamp is warranted because almost no evidence is available to you. You're welcome to not take my word for it. When I have another 5-10 hours to finish putting together the results and write a post with more details about minicamp, I will, but I'm mostly waiting to invest that time until I can do it most profitably, for example when we've gathered more 'after minicamp' data.

Then why don't you mention this in your "how to be happy" post, which is also being used as evidence of your productivity?

I don't understand. The 'How to Be Happy' post was written before I helped run minicamp. And, there are tons of things not mentioned in that post. That post barely scratches the surface of my thoughts on happiness, let alone research on happiness in general.

Do you know a single person who has improved fashion to an acceptable level as a result of those magazines?

I doubt magazines are ever the sole input on someone's fashion sense, but yes, I know people who have improved their fashion as a result of following magazines (or fashion blogs; same thing basically). Ask Peter Scheyer about this, for example.

Honestly, I'm not sure. Having a randomized control group and then looking at actual success would be nice. Even without a good control, one obvious thing to do would have been to do before and after tests of similar questions that test for rational behavior (e.g. whether they can recognize they are engaging in the sunk cost fallacy and things like that). It may be that given the circumstances the best evidence we have is self-reporting like this. If so, it is evidence for the success of the minicamps. But, it is not very strong evidence precisely because it is consistent with a variety of other not implausible hypotheses. This thread has made me more inclined to believe that the minicamps were successful, but has not strongly increased my confidence.
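The before-and-after design with a control group described above could be scored with a simple difference-in-differences estimate. All quiz scores below are invented purely for illustration:

```python
# Hypothetical sunk-cost-recognition quiz scores (0-10),
# invented for illustration only.
campers_before = [4, 5, 3, 6, 5]
campers_after  = [7, 8, 6, 8, 7]
control_before = [4, 5, 4, 6, 5]
control_after  = [5, 5, 4, 7, 5]

def mean(xs):
    return sum(xs) / len(xs)

camper_gain  = mean(campers_after) - mean(campers_before)
control_gain = mean(control_after) - mean(control_before)

# Difference-in-differences: attribute only the *excess* gain
# over the control group to the camp itself.
effect = camper_gain - control_gain
print(effect)
```

Without the control rows, the raw camper gain would overstate the effect, since retaking a similar quiz tends to improve scores on its own.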

Shouldn't these be the same? Bayesian evidence is surely scientific evidence - and vice versa. I don't see much point in multiplying definitions of "evidence". Let's just have one notion of "evidence", please. Promoting multiple "evidence" concepts seems to be undesirable terminology - unless there's a really good reason for doing so.

Actually, that helps. As a teenager, I noticed that most of the scientific method, including the key concept of experimentation, extended to personal knowledge and understanding. So, I did what seemed to be the obvious thing: I expanded my conception of science to include that domain. The public-only conception of science wasn't really much of a natural kind - since eventually technology would gain access to people's minds.

That explains why I don't get very much out of the Science vs Bayes material on this site. To me it just looks as though the true nature of science has not been properly grokked.

I must say, I still like my way: expanding the definition of science a teeny bit has a number of virtues over trying to stage a rationality revolution.

This comes off very strongly as the typical bureaucratic protectiveness - a business doesn't want to share raw data, because raw data is a valuable resource. If you came out and said this was the reason, I'd be more understanding, but it would still feel like a major violation of community norms to be so secretive.

If simple secrecy is indeed the case, I would urge you, please, be honest about this motive and say so explicitly! At least then we are having an honest discussion, and the rest of this comment can be disregarded.

We collected lots of data before and during minicamp

In short, what is the reason you can't share this RAW data, which you state you collected, and which you've presumably found sufficient for your own preliminary conclusions? I don't think Silas is asking for or expecting an elegant power-point presentation or a concise statistical analysis - I know I would personally love to simply see raw data.

Is there truly not a single spreadsheet or writeup that you could drop up for us to study while you collect the rest of the data?

Oh, sure. The reason is easy to communicate. We explicitly told minicampers that their feedback on the exit survey would be private and anonymous, for maximal incentive to be direct and honest. We are not going to violate that agreement. The testimonials were given via a separate form with permission granted to publish THAT data publicly.

A summary of the data can be published, for example median scores for measured values. But the data can't be published in raw form.

Not sure if raw testimonial data has been published yet. We do have data beyond testimonials and exit surveys, but that, too, requires precious staff hours to compile and write up, and it is still in the process of being collected.

Good grief, people. There are conspiracies that need ferreting out, but they do not revolve around generating fake data about the effectiveness of an alpha version of a rationality training camp that was offered for free to a grateful public.

I went to the minicamp, I had a great time, I learned a lot, and I saw shedloads of anecdotal evidence that the teachers are striving to become as effective as possible. I'm sure they will publish their data if and when they have something to say.

Meanwhile, consider re-directing your laudable passion for transparency toward a publicly traded company or a medium-sized city or a research university. Fighting conspiracies is an inherently high-risk activity, both because you might be wrong about the conspiracies' existence, and because even if the conspiracy exists, you might be defeated by its shadowy and awful powers. Try to make sure the risks you run are justified by an even bigger payoff at the end of the tunnel.

The positive endorphin rush from you and lukeprog sends signals that look just like the enthusiastic gushing I see from any week-long "how to fix your life in five easy steps!" seminar. Smart people get caught up in biased thinking all the time. I had a good friend quit AI research to sell a self-help book, so I may be particularly sensitive to this :)

Objective data means I can upgrade this from "oh bunnies, another self-help meme" to "oooh, fascinating and awesome thing that I want to steal for myself." As long as it signals like a self-help meme, I'm going to shoot it down just like I'd shoot down any similar meme that tried to sell itself here on LessWrong.

All right, but there's a fine line between shooting down self-help memes and unnecessarily discouraging project-builders from getting excited about their work. It's not fun or helpful for a pioneer to have his or her every first step be met with boundless skepticism. Your concerns sound real enough to me, but even an honest concern can be rude, and even a rationalist can validly trade off a tiny little bit of honesty for a whole lot of politeness and sympathy.

Why do I say "a tiny little bit of honesty?" Well, if the minicamp were being billed as "finished," "polished," "complete," "famous," "proven," or "demonstrably successful," as many self-help programs are, then it would make sense to demand data supporting those claims.

Instead, the PR blurb says that "Starting on May 28th, the Singularity Institute ran a one-week Rationality Training Camp. Our exit survey shows that the camp was a smashing success, surpassing the expectations of the organizers and the participants."

Leaving aside the colorful language that can and should characterize most press releases, this is a pretty weak claim: the camp beat expectations. Do you really need to see data to back that up?

There are conspiracies that need ferreting out, but they do not revolve around generating fake data about the effectiveness of an alpha version of a rationality training camp that was offered for free to a grateful public.

I don't think anybody is accusing the minicamp folks of anything of the kind. But public criticism and analysis of conclusions is the only reliable way to defend against overconfidence and wishful thinking.

When I ended my term as an SIAI Visiting Fellow, I too felt like the experience would really change my life. In reality, most of the effects faded away within some months, though a number of factors combined to permanently increase my average long-term happiness level.

Back then the rationality exercises were still being worked out and Luke wasn't around, so it's very plausible that the minicamp is a lot more effective than the Visiting Fellow program was for me. But the prior for any given self-help program having a permanent effect is small, even if participants give glowing self-reports at first, so deep skepticism is warranted. No conspiracies are necessary, just standard wishful thinking biases.

Though I think this was the third time that Silas raised the question before finally getting a reply, despite his comment being highly upvoted each time. If some people are harboring suspicions of SIAI covering up information, well, I can't really say I'd blame them after that.

It seems rather unlikely to me that being a mini-camp participant would have more of an effect on someone's life than being a Visiting Fellow, new techniques or not-- and if I am wrong, I would very much want to encounter these new techniques!

I wouldn't be that surprised. Explicit rationality exercises were only starting to be developed during the last month of my stay, and at that point they mostly fell into the category of "entertaining, but probably not hugely useful". The main rationality boost came from being around others with a strong commitment to rationality, but as situationist psychology would have it, the effect faded once I was out of that environment.

I find that unlikely. That would mean you never followed up after trumpeting your success -- you just posted the topic, and never bothered to come back and see what people had to say. And that you didn't see the top comment on the 125k fundraiser thread. Then again, this is consistent with what komponisto said about your "Mt Olympus" mentality: just say stuff ex cathedra and expect everyone to fall in line or otherwise swoon.

I don't understand this bit about my 'Mt Olympus' mentality. Until very recently I wasn't on SI's full-time staff. And as far as I can tell, I've spent vastly more time substantiating what I say by citing the relevant scientific literature (rather than relying on whatever personal authority I'm supposed to have, which I don't think is much at all) than anyone else on Less Wrong.

And no, I don't expect everyone to "fall in line or otherwise swoon." It's just that I don't have time to write up a 20-citation research article supporting every sentence I write. If the reasons that led me to write a certain sentence aren't available to you, as is usually the case, then you should only be updating your beliefs as much as you should given the evidence of my testimony, which in many cases should be very little.

As for you not believing me when I say that I don't recall reading your earlier comments calling for evidence about minicamp's success, well... the only evidence I have for you besides my testimony is that I hadn't replied to any of your earlier comments on the matter. If you don't believe me, well, so be it: that's all I've got.

For me, this isn't about making SIAI transparent; it does quite enough in that regard. It's about stopping an information cascade genie that's already out of the bottle.

Let me put it this way: right now the ratio of "relying on the assumption of mini-camp's success for decision making" to "available evidence for its success" is about 20-to-1. As I warned before, it's quickly becoming something "everyone knows" despite the lack of evidence (and major suspicions of many people, going in, that it wouldn't succeed). And that belief will keep feeding on itself unless someone traces it back to its original evidence.

It doesn't reassure me that I'm told I have to keep waiting before anything's conclusive, yet they can declare it a success now.

I just want the reliable evidence they claim to have, rather than just dime-a-dozen self-help testimonials. They collected hard data, and I gave them a list of things they could provide that are easy to gather and don't compromise privacy, and are much more likely to be present if the success were real than if it were not. Even after AnnaSalamon's circling of the wagons I don't see that.

Among the ultimate criteria for the minicamps is their impact on long-term life success. To assess this, both minicamp participants and a control group completed a long, anonymous survey containing many indicators of life success (income, self-reported happiness and anxiety levels, many questions about degree of social connectedness and satisfaction with relationships, etc.); we plan to give it again to both groups a year after mini-camp, to see whether minicampers improved more than controls. I’m eager to see and update from those results, but we’re only a couple months into the year’s waiting period. (The reason we decided ahead of time to wait a year is that minicamp aimed to give participants tools for personal change; and, for example, it takes time for improved social skills, strategicness, and career plans to translate into income.)

Meanwhile, we’re working with self-report measures because they are what we have. But they are more positive than I anticipated, and that can’t be a bad sign. I was also positively surprised by the number of rationality, productivity, and social effectiveness habits that participants reported using regularly, in response to my email asking, two months out. To quote a significant fraction of the numerical data from the exit survey (from the last day of minicamp), for those who haven’t seen participants’ ratings:

In answer to “Zero to ten, are you glad you came?”, the median answer was 10 (mean was 9.3).

In answer to “Zero to ten, will your life go significantly differently because you came to mini-camp?” the median answer was 7.5 (the mean was 6.9). [This was the response that was most positively surprising to me.]

In answer to “Zero to ten, has your epistemic rationality improved?”, the median answer was 7 (mean 6.9).

In answer to “Zero to ten, are you more motivated to learn epistemic rationality, than you were when you came?”, the median answer was 8.5 (mean 8.1).

In answer to “Zero to ten, have you become more skilled at modifying your emotions and dispositions?”, the median answer was 7 (mean 6.3).

In answer to “Zero to ten, are you more motivated to modify your emotions and dispositions, than you were when you came?”, the median answer was 9 (mean 8.3).

In answer to “Zero to ten, are you more motivated to gain social skills than you were when you came?”, the median answer was 8 (mean 7.7).

In answer to “Zero to ten, have you gained social skills since coming?”, the median answer was 7.5 (mean 7.2).

In answer to “zero to ten, did you like Luke’s sessions?”, the median answer was 9 (mean answer 8.7).

Some excerpts from the survey, about Luke’s sessions in particular:

“Luke is an excellent presenter. These sessions exceeded my expectations: I am convinced I have under-valued social interaction and techniques and that I can accelerate my success curve by aggressively adopting them. ”

“I really liked Luke's sessions. They were fun and interactive and well put together. There is an effect of being a bit more personally interested in the material.”

“Very useful content. Great presentation of it. Very good at handling the practical camp-issues and also useful fashion tips.”

“Luke’s sessions were concise, and well structured. Good PPT templates!”

“The social effectiveness and fashion sessions were very useful for me. ”

“Some parts of some sessions i felt went too slowly... but mostly extremely valuable information. wish we could have more social skills sessions - i would take another camp just for these super low-hanging fruit.”

“Luke gave concrete examples and advice. It was very helpful.”

“Luke was great as a session leader. His sessions were very clearly, cleanly organized, and discussions in his sessions were handled very well. Luke has, by far, the presence to lead a discussion among 16 people. :)”

“Luke was great. His sessions hit the relevant points in an effective manner.”

“Luke was very helpful and knowledgeable. The pace of his sessions was really good, and there was a lot of room for discussion. Luke also gave some helpful and specific fashion advice. ”

“Pretty much everything with Luke was phenomenal... Luke really made this whole camp worthwhile. I know this is more praise than constructive feedback, but I legitimately can't think of anything!”

I worked on mini-camp with Luke, and I can honestly say that it’s only because of Luke that we were able to hold minicamp at all, and also that he was a phenomenal work partner in organizing the camp, getting all the logistics together, and generally making it a positive and, for many, life-changing experience.

More generally: In minicamp and other SingInst projects, Luke combines energy, reliable ability to carry projects to completion, and strategicness as to which projects make sense and which aspects of those projects are most worth the extra effort; if you’re looking to reduce existential risk, making it possible for SingInst to stably hire Luke seems to me to offer unusually good bang for your buck.

How was the control group selected? Did you select a pool of candidates larger than you could accept, then randomly take a subset of these as a control? If not, then calling it a 'control group' is borderline at best.

The prior expectation of the influence of one week of training on personal success over a year is far lower than that of various personal and environmental qualities of the individual. This being the case, it is more reasonable to attribute differences in progress between the groups to the higher potential for growth in the chosen minicampers. This primarily reflects well on the ability of the SingInst rationality trainers to identify indicators of future success - a rather important skill in its own right!

A good point. The control group was of folks who made it through the initial screening but not the final screening, so, yes, there are differences. We explicitly discussed the possibility of randomizing admissions, but, for our first go, elected to admit the 25 people we most wanted, and to try randomizing some future events if the first worked well enough to warrant follow-ups (which it did). It is a bit of a hit to the data-gathering, but it wasn't growth potential as such that we were selecting for -- for example, younger applicants were less likely to have cool accomplishments and therefore less likely to get in, although they probably have more growth potential -- so there should still be evidence in the results.

Also, we marked down which not-admitted applicants were closest to the cut-off line (there were a number of close calls; I really wished we had more than 25 spaces), so we can gain a bit of data by seeing if they were similar to the minicamp group or to the rest of the controls.
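In code terms, the planned comparison is just a difference-in-differences on the survey scores: how much more did minicampers improve over the year than the control group did? Here is a minimal sketch, with entirely made-up numbers for illustration (the actual survey items and scores are not public):

```python
# Hypothetical sketch of the evaluation design described above: compare the
# year-over-year change on a "life success" score for minicampers against the
# change for the (near-cutoff) control group. All numbers below are invented.

from statistics import mean

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Average improvement of the treated group minus that of the controls."""
    treated_gain = mean(a - b for a, b in zip(treated_after, treated_before))
    control_gain = mean(a - b for a, b in zip(control_after, control_before))
    return treated_gain - control_gain

# Illustrative 0-10 survey scores before and one year after minicamp.
minicampers_before = [6, 5, 7, 6]
minicampers_after  = [8, 6, 8, 7]
controls_before    = [6, 5, 7, 6]
controls_after     = [7, 5, 7, 7]

effect = diff_in_diff(minicampers_before, minicampers_after,
                      controls_before, controls_after)
print(effect)  # 0.75 -- positive means minicampers improved more than controls
```

Because the control group was not randomized, a positive number here would still be ambiguous between "minicamp worked" and "we selected for growth potential", which is exactly why comparing the near-cutoff rejects separately is useful.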

I have a really hard time deciding how seriously I should take this survey.

The halo effect from doing anything around awesome people, like those found in a selected group of LessWrongians, is probably pretty strong. I fear at least some of the participants may have mixed up being with awesome people with becoming awesome. Don't get me wrong: being with awesome people in and of itself will work ... for a while, until you leave that group.

I'm not that sceptical of the claims, but from the outside it's hard to tell the difference between this scenario and the rationality camps working as intended.

Don't get me wrong: being with awesome people in and of itself will work ... for a while, until you leave that group.

I'm not that sceptical of the claims, but from the outside it's hard to tell the difference between this scenario and the rationality camps working as intended.

Indeed. SIAI is conducting a year-later follow up which should provide the information needed to differentiate. Answering that question now is probably not possible to the degree of certainty required.

Sure, but Konkvistador's post is about how the survey might be contaminated by awesome-people-halo-effect, not that we shouldn't be calling it a success. That's a separate concern addressed elsewhere. My post was addressing how we would tell the difference between "working" and "near awesome people".

Yes; what I meant by "success" was more like a successful party or conference; Luke pulled off an event that nearly all the attendees were extremely glad they came to, gave presentations that held interest and influenced behavior for at least the upcoming weeks, etc. It was successful enough that, when combined with Luke's other accomplishments, I know we want Luke, for his project-completion, social effectiveness, strategicness, fast learning curves, and ability to fit all these qualities into SingInst in a manner that boosts our overall effectiveness. I don't mean "Minicamp definitely successfully created new uber-rationalists"; that would be a weird call from this data, given priors.

I suspect that it's precisely because of concerns like these that they didn't present these numbers until now. It's hard to see what other evidence they could have for the efficacy of the "minicamp" at this stage.

(Edited to replace "bootcamp" with "minicamp" as per wedrifid's correction)

You're right to suspect that this could have happened. That said: I was a mini-camp participant, and I actually became more awesome as a result. Since mini-camp, I've:

used Fermi calculations (something we practiced) to decide to graduate from school early.

started making more money than I had before.

started negotiating for things, which saved me over $1000 this summer.

begun the incredibly fucking useful practice of rejection therapy, which multiplied my confidence and caused the above two points.

rapidly improved my social abilities, including the easily measurable 'success with women' factor. This was mostly caused by a session about physical contact by Will Ryan, and from two major improvements in wardrobe caused by the great and eminent lukeprog (in whose name I just donated). I wasn't bad at social stuff before - this was a step from good to great.

resolved my feelings about a bad relationship, mostly as a result of boosted confidence from increased social success.

I stuck around in California for the summer, and gained a lot from long conversations with other SIAI-related people. The vigor and insight of the community was a major factor in showing me how much more was possible and helping me stick to plans I initiated.

But, that said - the points listed above appear to be a direct result of the specific things I learned at mini-camp.

One of the many things I updated on as a result of the 9-week bootcamp is the importance of tone. I'm sympathetic to your data-crusade, but the way in which you're prosecuting it is leading me to dislike you.

You've made a number of posts indicating that you place a high priority on finding and joining a rationalist community. That will be more difficult if you are generally perceived by rationalists as a hostile conversationalist; you should be more strategic about achieving your goals.

Should this really be under main, and promoted at that? My impression was that main posts, and especially promoted ones were supposed to be reserved for posts discussing rationality and its applications, meant to be held up as examples of our best work. I have nothing against lukeprog, the SIAI, or this effort, but I don't think this really qualifies.

Yes, absolutely. Calls for action in support of causes associated with this site are material for promoted front page articles. It is valuable to send the message that participating, actually donating money rather than just thinking "That's nice, SIAI is hiring another research fellow", is important by giving prominence to the announcement.

Calls for action in support of causes associated with this site are material for promoted front page articles.

But there aren't causes associated with the site. If the site were about promoting the SIAI, I would agree with you. But LessWrong is about rationality, not promoting SIAI, even though those two sometimes coincide.

It is valuable to send the message that participating, actually donating money rather than just thinking "That's nice, SIAI is hiring another research fellow", is important by giving prominence to the announcement.

I can't disagree more. That's just telling people a bottom line. We need to be promoting articles that say why the SIAI is doing good work, and discussing the rationality behind supporting it. Not just telling people that, "Hey, there's this organization we think is great and you should donate to it."

That is simply false. LW was created by SIAI with the purpose of generating rationalists interested in reducing existential risks, and accepting and even encouraging that it might produce rationalists interested in other causes.

In the early days, we specifically avoided talking about SIAI, FAI, and existential risks because we didn't want shiny discussions about those topics to overwhelm our work on rationality. Now that we are more established, we no longer do that. From the beginning, that policy was meant to be temporary.

We need to be promoting articles that say why the SIAI is doing good work, and discussing the rationality behind supporting it.

False dichotomy. We have had lots of discussions about the effectiveness of SIAI. We can also have announcements of when they have a project that needs funding. There are people here who are already convinced SIAI is an effective charity worth supporting, but need some encouragement to actually support it. That is why this kind of announcement is important.

You're right. That was a horribly crafted sentence, in many ways. They are clearly associated. But the site is about rationality, not the SIAI. That was my point. (The statement is also patently false if you take "rationality" as a cause, which is entirely reasonable.)

False dichotomy. We have had lots of discussions about the effectiveness of SIAI. We can also have announcements of when they have a project that needs funding. There are people here who are already convinced SIAI is an effective charity worth supporting, but need some encouragement to actually support it. That is why this kind of announcement is important.

Once you have 20 or more karma points, you're allowed to make posts to the main community blog. (Click 'Create New Article' and change 'Post to' to 'Less Wrong'.) This section is intended for posts about rationality theory or practice that display well-edited writing, careful argument, and new material.

Once you have 20 or more karma points, you're allowed to make posts to the main community blog. (Click 'Create New Article' and change 'Post to' to 'Less Wrong'.) This section is intended for posts about rationality theory or practice that display well-edited writing, careful argument, and new material.

That is in the context of telling people new to the site what sort of article they should write if they want to publish in main, and it describes the primary usage, but it is not comprehensive. The actual use of the Main section does include this sort of announcement. It is normal, generally accepted by the community, and has been going on since LW split into Main and Discussion sections.

In general you will find that the actual use of many things in life does not match up with original intentions or simplified descriptions.

Judging by the upvotes of my original comment, there is not as much unity on that point as you seem to believe.

And frankly, the context doesn't matter. If posts like these are acceptable, then that statement is patently false, and should be changed. If it is not false, then posts like this are inappropriate on main. But as clear as that statement is, there is no room for consistency between it and a post like this being in main.

In general you will find that the actual use of many things in life does not match up with original intentions or simplified descriptions.

That's an incredibly patronizing tone to take, and I don't appreciate it.

But putting that aside, this is a largely irrelevant statement (and its irrelevance only serves to highlight the insult). I don't disagree. But what bearing does that have on what should be? Should we attempt to describe what types of posts are acceptable in main accurately, or should we not? There may be arguments on both sides. But that's not one of them.

If posts like these are acceptable, then that statement is patently false, and should be changed. If it is not false, then posts like this are inappropriate on main.

What does the description "is acceptable" refer to? Acceptable by what criterion? The real question is whether things like this should be encouraged or discouraged, using whatever methods are at our disposal, including establishing a policy for moving "off-topic" posts out of Main. Instead, you seem to be appealing to an existing social attitude, which shouldn't be a major factor, as it too can be influenced by good arguments and other means.

Continue to develop his metaethics sequence, the conclusion of which will be a sort of Polymath Project for collaboratively solving open problems in metaethics relevant to FAI development.

This in itself, well-run, is worth a salary in expectation. Luke, have you talked to Michael Nielsen or Tim Gowers or Terence Tao or Gil Kalai about things that worked or didn't in the polymath projects they've run? I'm certain at least Nielsen will be very interested.

I'm confused. Why are donations for this separate from other kinds of SIAI donations? Some kind of psychological trick to increase total donations? Or will his wage be directly proportional to how much money comes with such comments?

From the article, my semi-guess is that the SIAI has a number of things they'd like to fund, one of which is hiring Luke. If you think that hiring Luke is more important than other things the SIAI could be doing with immediate income right now, then you can specify that in your donation and make it happen.

Which brings up a question: what are the other things the SIAI would be spending new money on other than hiring Luke? Nothing against Luke, who is clearly awesome, but it's hard to build an ordered list of preferences if every choice but one is labelled "???".

Also, will Bitcoin donations be matched by their dollar exchange rate?

I came up with a better idea than waiting for an answer to this: I just donated 2.74 bitcoin to SIAI. According to MtGox, the last trade was at a price of $10.8 per bitcoin, so that works out to about $30. If you are reading this, I encourage you to take the initiative and match this contribution yourself.

This really should have been done as a Kickstarter project. If SIAI suddenly decides it doesn't have enough money to fund lukeprog, what is going to happen to the people donating "unrestricted" but with the intent to fund lukeprog? Why should SIAI waste resources administering the fundraiser when a perfectly good third-party product exists?

Had already donated; but learning this prompted me to add a monthly donation as well.
What makes it qualify as a 12-month-pledge though, as opposed to an indefinite-period-direct-debit? I didn't see any language to that effect. Is there a super-secret form I should have used?

When I called the contact number, they said I should use the monthly donation method, and it would be counted as such. I asked if I should put it in the comment field, and they said that would help clarify things. My comment read, "I pledge to donate $X per month for 12 months."

The only place I ever saw it mentioned was a small sentence in an email that went out only to those on the SingInst mailing list. I don't know if they planned on doing it that way from the start, especially since they didn't seem to think about or realize the logistical problems around 'making a pledge', or monitoring follow-through (what happens if someone cancels after one month?), etc.

The Singularity Institute for Artificial Intelligence, the nonprofit I work at, is currently running a Summer Challenge to tide us over until the Singularity Summit in October (Oct 15-16 in New York; ticket prices go up by $100 after September starts). The Summer Challenge grant will double up to $125,000 in donations, ends at the end of August, and is currently at only $39,000, which is somewhat worrying. I hadn't meant to do anything like this, but:

I will release completed chapters at a pace of one every 6 days, or one every 5 days after the SIAI's Summer Challenge reaches $50,000, or one every 4 days after the Summer Challenge reaches $75,000, or one every 3 days if the Summer Challenge is completed. Remember, the Summer Challenge runs until the end of August; after that the pace will be set. (Just some slight encouragement for donors reading this fic to get around to donating sooner rather than later.) A link to the Challenge and the Summit can be found in the profile page, or Google "summer singularity challenge" and "Singularity Summit" respectively.

Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.

Luke is an excellent communicator and agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has shown this skill publicly on Less Wrong and also on his blog, with this distilled analysis of Eliezer's writing, "Reading Yudkowsky."

Luke is a genuine modern-day renaissance man, a true polymath. However, Luke is very aware of his limitations and has devoted significant work to finding ways of removing or mitigating them. For example, any person with a broad range of academic interests could fall prey to never acquiring useful skills in any of those interest areas. Luke sees this as a serious concern and wants to maximize the efficiency of searching the academic space of ideas. Again, for Luke this is a teachable skill. His session "Productivity and Scholarship" at minicamp outlined techniques for efficient research and reducing akrasia. None of that material would be particularly surprising for a regular reader of Less Wrong -- because Luke pioneered critical posts on these subjects. Luke's suggestions were all implementable and process-focused, such as utilizing review articles and Wikipedia to rapidly familiarize oneself broadly with the jargon of a new discipline before doing deep research.

Luke is an excellent listener and has a high degree of effectiveness in human interaction. This manifests as being someone you enjoy speaking to, who seems interested in your views, and who is then able to tell you why you are wrong in a way that makes you feel smarter. (Compare with Eliezer, who will simply turn away when you are wrong. This is fine for Eliezer, but not ideal for SIAI as an organization.) Again, Luke understands how to teach this skill set. It seems likely that Luke would raise the social effectiveness of SIAI as an organization and also generate positive feelings toward the organization in his dealings with others.

Luke would have a positive influence on the culture of the SIAI, the research of the SIAI, and the public face of the SIAI. Any organization would love to find someone who excels in any one of those dimensions, much less someone who excels in all of them.

Mini-camp was an exhausting challenge for all of the instructors. Luke never once showed that exhaustion, let it dampen his enthusiasm, or let any annoyance show (except, perhaps, as a tactical tool to move along a stalled or irrelevant conversation). In many ways he presented the best face of "mini-camp as a consumable product." That trait (we could call it customer focus or product awareness) is a critical skill the SIAI is lacking.

An example of how Luke has changed me: I was only vaguely aware of the concepts of efficient learning and study. Of course, I knew about study habits and putting in time at practice in a certain sense, but such advice usually emphasizes practice and time investment (which is important) while underemphasizing the value of finding the right things to spend time on.

It was only when I read Luke's posts, spoke to him, and participated in his sessions at mini-camp that I received a language for thinking about and conducting introspection on the subject of efficient learning. Specifically, I've applied his standards and process to my study of guitar and classical music, and I now feel I've effectively solved the question of where to spend my time and am solely in the realm of doing the actual practice, composition, and research. I've advanced more in the past few months of music study than I had in the prior year and a half of playing guitar.

In the past month I have actively applied his skill of skimming review material (review books on classical composers) and then using Wikipedia to rapidly drill down on confusing component subjects. In the past month, I have actively applied his skill of thinking vicariously about someone else's victory that represents goals of my own, to make a hard road seem less like a barrier and more like negotiable terrain. In the past month, I have applied his skill of considering the merits of multiple competing areas of interest, determining the one with the most impact, and pursuing it (knowing I could later scoop up the missing pieces more quickly).

I did all of that with the awareness that Luke was the source of the skills and language that let me do those things.

Was planning on waiting 'til the last day to decide with maximum info (in particular, whether the maximum match amount was met). If enough other people think like me, SIAI should see a rush of cash in the last few days of the contest.

But Eliezer forced my hand with this from MoR:

Thus this fic will next update at 7pm Pacific Time, on August 30th 2011, unless the Summer Challenge reaches $50,000 or more before then, in which case the fic will update sooner (but still at 7pm, because I'm not cruel).

By doing that, you gain "maximum info" for yourself while denying it to others; in particular, if there's a last-minute rush then anyone donating before the end of that rush may well be misled about whether the limit is likely to be reached.

It's not necessarily wrong to favour yourself over others. But it seems a bit weird to do so in the context of a charitable donation...

By check? Can you PM or email me with the name? The reason I ask is so that I can figure out how close HPMOR is to the 4-day update threshold, add it into my calculations in advance, and make sure it doesn't get double-counted when the actual check arrives. (BTW, do you want credit with my thousands of fanatic readers for bringing the threshold closer?)

I think a lot of the hubbub in this thread is due to different interpretations of SIAI-related folks saying that the minicamp was 'successful'. I think many people here have interpreted 'success' as meaning something like "definitely improved the rationality of the attendees lastingly", and I think SIAI folks intended to say something like "was competently executed and gives us something promising to experiment with in the future".

Feels counterintuitive, but if just 50 people establish arrangements like this one, SingInst gets a reliable supply of funding on the current spending level independent of funding rallies or big-sum donors.

50 people is a lot. Certainly a large number of people here simply cannot afford that sort of commitment to any charity. There are a lot of grad students here for example, some of whom are getting less than that for their monthly salaries. In fact when I saw Rain's comment my first thought was "how the heck does Rain have that kind of money?"

my first thought was "how the heck does Rain have that kind of money?"

Low cost of living and a good job. I've always wondered the opposite, "how the heck does nobody else have any money?" I have so much left over every month, I wondered what to do with it for a long time before deciding on making a better future in the best way I know how.

How do you go about having a low cost of living? (I think I know how one goes about having a good job.) My best attempts at being a total cheapskate still have me spending my whole 8000 SEK monthly income. Okay, sure, I eat at cafeterias rather than packing lunch, and I buy fresh vegetables rather than eat lentils everyday, but you're a freaking fashion plate!

For comparison, Julia Wise and Jeff Kaufman spend $22K/year (i.e. 12000 SEK/month, 6000 each), and I guess it's cheaper to be a couple than two single individuals. (FWIW, I spend about as much as each of them, but I live in Italy -- it would have been very hard for me to live on that little in Ireland.)

Saying "the rationality minicamp was highly successful" before you have analyzed the data you have gathered to assess the success of the rationality minicamp is irrational.

If success at the minicamp is important - as suggested by its being listed first in Eliezer's recommendation - why not wait until you CAN analyze the data, to see whether it really was successful, before recommending hiring Luke? Doing so means (a) you can make a more persuasive case to donors, and (b) if the minicamp WASN'T successful, the hire can be reconsidered.

The fact that this plug happened before the analysis signals that Eliezer is committed to recommending Luke's hire regardless of whether the analysis shows the minicamp was successful or not. And if HE doesn't think the minicamp's success is relevant to the merits of hiring Luke, why is he using it to persuade us?

Disclaimer: I think Luke has added lots of value to this site, and I would be unsurprised if a later transparent analysis showed the minicamp to be highly successful. But the OP (as well as comments cycling through various reasons/excuses for failing to present data, etc.) is suggestive of irrational salesmanship. Perhaps a salutary lesson that rationality experts still succumb to bias?

Saying "the rationality minicamp was highly successful" before you have analyzed the data you have gathered to assess the success of the rationality minicamp is irrational.

I am reminded of Einstein's Arrogance here. The people involved in the camps received a lot of evidence that (obviously) isn't available to us, because we weren't there. They might already have enough evidence to be convinced that it worked - but of course, they also set up data-gathering mechanisms so that they could have enough evidence to convince people who don't have access to the physical experience of the camp that it worked.

I expect that this is the case, and that Eliezer sees absolutely no problem with citing this as a point in favour of Luke's hire.

I was at the camp. It was spectacularly awesome in my judgement, too, and Luke was a big part of that. \end{soft.bayesian.evidence}

Specifically, the camp is tied for the title of the most life-altering workshop-like event of my life, and I've been to many such events, inside and outside academia (~3 per year for the past 10 years). The tie is with the workshop that got me onto my PhD topic (graphical causal modelling), so that's saying something.

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially CA, but on the other hand, if I keep it, I can live ~2/3 of a month on that, and since I also (aspire to) work on FAI-related issues, isn't this a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and losing $5 isn't going to kill me, I've just made this tiny donation...

All of this is surprisingly effective in overcoming my akrasia, especially Nisan's way, so on top of my donation I wanted to subscribe monthly - unfortunately it seems that a credit card is necessary for that. Any ideas how to circumvent this? I do not want to get a (regular) credit card.

I've been low on cash recently, so I can't donate as much as I used to, but I resumed the 10 EUR/month regular donation I previously had going, which got cancelled for some reason I forget. (Hopefully I'll keep that going indefinitely.)

I felt a remarkable resistance to donating such a small amount, feeling that it's even more embarrassing than not donating at all. But then I came to my senses and figured I'd post this comment to encourage others to also make a small donation rather than no donation. If you're considering giving a sum that seems too small to be worth it, there's no shame in that! I did it too!

Also, go Luke! You're awesome, and have the kind of amazing energy to work on these issues that I can only dream of having. Just be careful not to burn yourself out.