Thanks for sharing your experiences and the warning that comes with them (this is the type of post I’d like to promote!), though I predict I’ll do well in this program for the reason TurnTrout gave in the other comment: I enjoy a lot of what I’m doing! *actually considers each item*… yep! This is honestly what I’d rather be doing than most other things, so I feel like Nate Soares in that regard (in the post of his I linked).

Regarding my why/​motivation/​someone to protect, I’m going to leave that for a separate post; I wanted this one to be a short, to-the-point intro. My “why” post will be much more poetic and wouldn’t fit here. To separate the two more cleanly: here I’m referring to a terminal goal.

Though I would love to clarify my instrumental goals for achieving that terminal goal! Those are the three bullet points: “better self-model, feedback, & self-improvement”.

Better self-model: I would like to ~maximize my usefulness, which will require working hard for several years (so this is closest to “productivity/​biological limits”). Getting the most bang for my buck during those years means finding a sustainable sprint/​jog pace, so I’m making predictions and testing them to build a more accurate self-model.

Self-improvement: I feel lacking in math and in technical knowledge of the open problems in AI safety (as well as of how progress has been made so far).

I initially interpreted Mitchell’s comment as mocking as well, but on a second (and third) read I interpreted it as:

A reference to the common textbook phrase “an exercise for the reader”, combined with the title of this post. Meant as a funny-joke-but-would-also-be-really-cool-if-someone-actually-did-it. This is just speculation though!! (Is this 100% correct, Mitchell?)

I greatly appreciate you standing up for me though!!

If my speculation is correct, then I think the reason both you and I originally interpreted it as mockery is the “those who are a little more advanced” part (meant as hyperbole) and the “*actually*” part.

I’m in a Discord server with a lot of people who have applied, and as far as I can tell, none of us have received an answer yet! Someone on the MIRI team told us they’re still deciding whom to accept, but coordinating logistics among multiple decision-makers is making it take a bit!

Thanks for mentioning Duncan’s “I want to be healthy, and I deserve rest.” That one resonated with me, so I immediately tried the exercise with “Hardcore Comet King” and “I’m a human too who deserves comfort.” Situation: taking a cold shower to be more focused when meditating.

===============================================

Comfort: *inner scream* cold showers suck!! I don’t like it at all.

Comet King: Yes, it sucks, but it’s only a temporary discomfort until AI takeoff, and then you can have all the comfort you could want.

Comfort: …

Me: Comfort could you summarize what was said?

Comfort: Cold showers suck, but once AI takeoff happens, I’ll have a lot of comfort.

[Then I remembered death]

Comfort: *inner scream*

Comet King: If you’ll allow these smaller discomforts, we’ll have a greater chance of avoiding the greater ones.

===============================================

And then I took the cold shower.

(I don’t feel like I fully captured the conversation; I think it had some more dialogue.)

I’m not too sure how to mesh this idea (IDC/​fusion) with meditation, specifically noticing intentions. I can notice an “aversion to taking a cold shower” and focus on it until it fades away, OR I can do IDC/​fusion so that those aversions/​thoughts won’t show up in the first place.

I would say the second is better, but I’m a novice at both, so I might be misrepresenting them. There might also be relationships between these ideas that I’ve completely missed.

The very top of the post lists several bullet points of “the good” that would happen to you if you had this skill. Is that what you were asking for? Or were you asking for a personal life example: “I used to do [thing], but I gained this skill and now I do [better thing]”? If the latter, then he has a stories tab for his emotional-processing post, and I assume he’ll eventually have a stories tab for this post as soon as someone sends him a personal story.

I’ve read through your series so far, and I don’t believe your writing quality has dropped. Eliezer’s inadequacy sequence went from 200 to 50 karma from beginning to end, and you’ll see the same drop in views in multi-part YouTube videos. I believe it’s just the barrier to entry that comes with each additional post in a sequence, since you have to read the earlier ones first. Posting individual posts and then compiling them into a sequence sounds like a good solution. Have you done a Yoda timer on this yet? lol

I would like to see the dark side technique, which is described on Ziz’s blog here and has a basis in Nate Soares’ guilt series. It’s probably related to goal factoring and internal double crux, just by the sound of those topics. If I were to summarize it, it’d be: “Never do anything unless you know how it benefits you.”

What are your experiences of the “rationalist uncanny valley”? I assume the sunk cost fallacy fallacy you mentioned, but is there anything else? For me personally, it would be “expending too much social capital for truth’s sake” and the above dark side technique. Both of these came from taking those ideas (Truth and the Dark Side) seriously, actually trying them in real life, and overdoing it in wrong ways. I did learn from those experiences and am better for it, so trying, failing, learning, and repeating was beneficial overall. I assume that’s what you would call the uncanny valley?

If so, then to improve it would be to improve that feedback cycle. Anything that increases trying, minimizes failing, and provides better feedback is a possible research avenue. From your own series (and a couple extras):

For me, when a tangent conversation starts to die out, I literally say, “So… what do you think about [previous topic]?” The other person will usually laugh, probably because they didn’t even realize they went off on a tangent.