As it stands, I don't think that Effective Altruism (EA) fully describes my ethical values, let alone my other personal values. So, I wanted to write something that does fully describe my values.
I really like this post! It's nice to have your views spelled out so explicitly. I found myself agreeing with a lot of this and pretty confused about other bits. Some of the things that I most want answers/clarification for:
_______________________________
"I think atheism is the most important first step to take if you're trying to help the world. I'm an antitheist."
This seems like a very strong statement. Surely one could believe in some deistic god/the "god of philosophers" and this wouldn't affect their moral views very much! Also, if no god implies moral anti-realism, then it seems like becoming an atheist would NOT be an important step towards helping the world, because "helping the world" is no longer a meaningful concept. Maybe I am misunderstanding the realism/anti-realism debate though.
____________________________
"I have a whole paper written up on why I don't want to have children. If you're interested in reading it, message me!"
I have some intuitions against antinatalism, so I am very interested in reading this!
I'll probably write a whole post on this sometime this year, but tldr; (or is it tl;dr??): I used to be Mormon. I think we talked about this a little bit. When you're religious, your ONLY goal as a rational person is to pursue your religion's goals (get into heaven, achieve nirvana, etc.). Your secondary goal, if you're trying to do good, is to convert as many people as possible to your religion (things fall apart a bit for Judaism, but Judaism still has many commandments). Most atheists actually probably do less good for the world? Many rationalize for "fun" or something adjacent. But if you want to be a maximalist, it's really difficult to do so if you're religious. *Especially* if you want to be suffering-focused. E.g., Christians and Muslims have insane ideas of hell, which are taken to be completely OK. It's sort of like how I think it's really difficult to be suffering-focused if you're not vegan, or to actually care about animal suffering if you're not vegan. It's certainly possible, but there's something about actually changing your life that's quite powerful.
For something like Mormonism, which is easily falsifiable, I think it's incredibly difficult to be rational about much of anything. Some religions are less falsifiable.
I care about preventing the long-term suffering of beings in the universe. Most religions don't have moral circle expansion (veganism is laughed at, unless you're in certain sects of Buddhism or Hinduism). Most religions believe in some sort of Armageddon. No religion anticipates AI catastrophe. You might still be able to think about these things if you're religious (Google the Mormon Transhumanists), but only through weird loopholes.
I don't think these thoughts were all that coherent, but hopefully that helps. I hope to better explain this in a future post.
"Human capacity for suffering is higher because we're more complex individuals. After thinking about things for a while, I don't think this argument makes sense. [...] It seems like, if you squish an ant and it runs wildly in pain, that pain seems to be quite equivocal to human pain."
I think this misses the mark. Yes, there are some physical forms of suffering that are comparable across species, and in that sense ant pain = human pain. But having complex emotions and a social nature as a species leads to many unique and new forms of suffering (think "a mother grieving for her miscarried child" or "someone with severe depression"). Humans can experience intense emotional suffering along with intense physical suffering. I imagine that as brain structure gets more complex, there are more dimensions along which suffering can grow.
___________________________
"I'm much warier of EV calculations now than I was when I first entered the EA sphere, but I think they still hold merit if they're quality."
Pretty curious about this. What changed your mind, and what does a "quality" EV calculation look like?
_____________________________
"BUT. There is one rule, which is pretty closely covered in my last blog. If you want to maximize utility, or minimize negative utility, you must be admired for what you're doing."
I think there is a pretty easy counterexample here, where e.g. walking away from Omelas happens to be objectively good, yet every townsfolk shuns you for walking away and therefore no one admires your good behavior. Admiration is also very subjective, and it seems odd that your moral framework is otherwise fairly objective (e.g. caring about suffering, impartial to the thing experiencing it). Why suddenly introduce this partial "you should care about what specific people -- the admirers -- think"? Also, how do you determine whose admiration is the admiration you should pine for? This framing seems fraught with issues related to subjectivism.
______________________________________
"In my opinion, a sign of a good moral framework is that it's infinitely demanding"
I appreciate that you bite the bullet here; I feel similarly, though I don't think I have any philosophical reasons for thinking that demandingness is a sign that a framework is a good one (it's more of just an intuition that demanding theories seem more self-consistent). What is your reasoning here?
______________________________________
"I think if you live til ~2038, you've about guaranteed that you've achieved longevity escape velocity."
This (along with the accompanying section on AI timelines) is both terrifying and very exciting to think about. Let's hope we get there!
My year is probably a little later (say living to ~2060 for 90%+ chance of LEV), mainly because I think there's some non-negligible chance that a lot of AI concern is mistaken or confused. I was a little bit surprised by how confidently you made predictions about timelines and similar issues. How did you come to your conclusions?
____________________________________
Overall really awesome post! I enjoy your writing style; I'm also now more concerned about suffering-focused ethics compared to my previous framework (which resembled classic utilitarianism). Looking forward to reading more this year :)
Well for one, you're definitely wrong about those feelings being exclusive to humans. In factory farms, when calves are taken away from their mothers just after birth, the mothers mourn for *months*. Other mammals definitely suffer from depression and anxiety, and certainly birds do (see chickens kept in factory farms). Do insects suffer from depression and anxiety? Google says yes (https://www.sciencedaily.com/releases/2013/04/130418124858.htm), which I figured was the case.
I don't think the person-to-fly exchange rate is one-to-one. But it's less than 1,000 for sure. Probably less than 500. Probably less than 300. But it also depends on how much suffering the death of a person causes, how much suffering is in their life, etc. (I don't really care about QALYs, because I only care that beings aren't suffering, not that they're living good lives.) It seems like insects live really horrible lives, so if you don't care about preferences, you should kill most insects as painlessly as possible. Especially because insects live such short lives: if they're in a state of suffering, you should probably just kill them. But I haven't thought very much about this at all. CRS (Brian Tomasik) and Rethink Priorities have some thoughts on insect welfare that I haven't read yet.
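(If it helps to see what I mean by an "exchange rate", here's a minimal sketch in Python. Every number in it is a made-up illustrative assumption of mine, not an estimate from Tomasik, Rethink Priorities, or the post; the rough idea is just expected suffering ≈ P(sentience) × intensity × duration.)

```python
# Illustrative only: a toy "expected suffering" comparison across species.
# Every number below is a hypothetical assumption, not a published estimate.

def expected_suffering(p_sentience, intensity, duration_days, n_individuals=1):
    """Expected suffering ~ P(sentience) x intensity x duration x count."""
    return p_sentience * intensity * duration_days * n_individuals

# Hypothetical weights: a human suffering for a year vs. a fly suffering
# for most of its short life.
human = expected_suffering(p_sentience=1.0, intensity=1.0, duration_days=365)
fly = expected_suffering(p_sentience=0.2, intensity=0.1, duration_days=20)

print(human / fly)  # ~900 with these made-up inputs; small changes to the
                    # sentience or intensity weights move the ratio a lot
```

The point is just that the ratio is extremely sensitive to the sentience and intensity weights, which is part of why I'd only say "less than 1,000" rather than anything precise.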
___________________________________________
Something something: if you're a total hedonic utilitarian, utility calculus lets you justify all sorts of things, like buying a castle (or worse). The problem is that, if you *actually* run the calculus, you see that buying a castle is literally net bad. You harm the movement way too much, people lose trust, people stop being frugal, people start to care less about others and care only about ends. Hence "only do things in a way others admire": you probably maximize more utility doing this. Similarly, with Carrick Flynn in Oregon, SBF spent *a lot* of money to try to win the election. Less money would've *certainly* worked in Flynn's favor. Literally the biggest problem with his campaign was that people *hated* all the ads they were seeing on TV, the really nice big ads they received in the mail, and that he had connections to a crypto guy (who, it turns out, was a fraud!!). The other big issues were that Flynn hadn't ever voted, he hadn't lived in Oregon in years, and he was a white male. It seemed obvious that Flynn was not going to win, but because the EV of having someone in office who cares about pandemic prevention was so high, we funneled millions in anyhow. Tbf, I really like Carrick. He would've made a phenomenal representative. But he was not going to win.
There are ways EV reasoning can go wrong when you're suffering-focused too, but that's why I have my rule that I hope never to break.
What does a quality EV calculation look like? Being serious about your chances. Weighting things properly. Spending money wisely. Not cutting corners. "Being admired". Being heroic.
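To make the castle point concrete, here's a toy sketch of a naive EV calculation versus one that prices in the second-order stuff: lost trust, norm decay, recruiting damage. All of the dollar figures are hypothetical placeholders I made up, not real estimates; the only point is how including those terms can flip the sign.

```python
# Toy sketch: naive vs. fuller EV for a controversial purchase ("a castle").
# Every dollar figure is a hypothetical placeholder, not a real estimate.

direct_benefit = 5_000_000    # venue/productivity value you'd naively count
purchase_cost = 15_000_000    # sticker price
resale_value = 14_000_000     # naive view: "we mostly get the money back"

naive_ev = direct_benefit - (purchase_cost - resale_value)
print("naive EV:", naive_ev)  # +4,000,000 -> looks clearly worth it

# Second-order terms the naive calculus leaves out:
lost_donor_trust = 6_000_000     # donors who give less after the headlines
movement_norm_decay = 3_000_000  # people caring less about frugality
recruiting_damage = 2_000_000    # talent put off by the optics

fuller_ev = naive_ev - (lost_donor_trust + movement_norm_decay + recruiting_damage)
print("fuller EV:", fuller_ev)   # -7,000,000 -> the same purchase is net bad
```

That's basically what I mean by a "quality" calculation: the second-order terms are part of the calculus, not an afterthought.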
_______________________________________
I keep hearing people talking about bullet biting: I think I missed this new EA lingo 😭😭. I think I explained my reasoning for the most part. It's OK for a framework to be infinitely demanding because the stakes are closer to infinitely high than to zero (bear with me on the mathematical incongruence). You can also keep helping beings, and after you're done helping them, there are nearly infinitely many more to affect. That's not to say you should bear this burden alone, or that not doing good makes you a bad person, but that doing good is really important, especially if you're trying to eliminate suffering.
________________________________________
Longevity research is getting a lot of funding. Even if we don't have aligned AGI by 2038, I think we get enough added years by that point to let us live forever. If we don't get AGI, though, not everyone who lives to 2038 will get that help. I don't think we get post-scarcity without AGI, and if you're in your last years, longevity help is probably less effective. Is post-scarcity certain with slow takeoff? Maybe I'd extend my timeline. But I'm a pretty big fast-takeoff believer (though I don't actually have much of a reason why. Maybe the Great Filter stuff I talk about below).
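(For anyone unfamiliar with LEV, here's a tiny toy model. The starting expectancy and gain rates are made-up assumptions of mine, purely to illustrate the mechanic: once remaining life expectancy grows by more than a year per calendar year, your expected death date keeps receding.)

```python
# Toy model of longevity escape velocity (LEV). The starting expectancy and
# gain rates below are made-up illustrative assumptions.

def reaches_lev(remaining_years=50.0, gain_per_year=1.2, horizon_years=200):
    """True if remaining life expectancy never runs out within the horizon,
    given each calendar year costs 1 year while progress adds gain_per_year."""
    for _ in range(horizon_years):
        remaining_years += gain_per_year - 1.0
        if remaining_years <= 0:
            return False  # expectancy ran out before progress caught up
    return True           # expected death date kept receding: that's LEV

print(reaches_lev(gain_per_year=1.2))  # True: gains outpace aging
print(reaches_lev(gain_per_year=0.3))  # False: expectancy eventually runs out
```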
I place <1% odds on AI concern being mistaken or confused. I think it's negligible. Am I crazy for this? I don't know. I place higher odds on AI alignment being impossible. I probably became most convinced of this after reading the grand-futures work by Sandberg, Drexler, Bostrom, Ord, and Armstrong. If we can deduce that certain things are physically possible to build, we will figure them out. I also think that AGI is probably the Great Filter. It fits too perfectly, and I don't know what else it could be (other than simulations or the zoo hypothesis, which also demand AGI). Something something Occam's Razor.
________________________________________
Yay on enjoying the writing style! Yay on being more suffering-focused! Boooooo on not being fully suffering-focused. My new goal is to make you completely suffering-focused before year's end.
"walking away from Omelas happens to be objectively good, yet every townsfolk shuns you for walking away and therefore no one admires your good behavior."
IIRC, the townsfolk don't shun you for walking away? Literally everyone walks away?
Also, I actually think you shouldn't walk away from Omelas. In theory I'm a total hedonic lexical utilitarian, but I think the child's concentrated suffering is lexically worse than that suffering being spread among lots of people (hence the sufficientarianism). Unless I didn't recall correctly, and letting the child out makes it so everyone suffers just as much as the child does. Then yes, even if you're suffering-focused, the correct thing to do is to not let the child out. Or maybe you let the child out because there are some uncertain odds that you can work to improve everyone's situation, whereas you can't while the child is locked in the closet. Idk, I don't like this story/allegory very much; I actually just read it for the first time the other day.
Yeah I agree that it's kind of a weird rule. But let me do some explaining.
Much of AIS work is *not* admired. Either because boo longtermism (it justifies crazy stuff, I'm an anti-natalist, focus on problems of today), boo EA culture (diversity, uptightness, annoying, weird, sexism/assault), or boo AIS work itself (we should be doing ethics, or AIS isn't a real thing). I obviously don't think we should stop AIS work (although I think it's rather concerning that most AIS workers are total hedonic utilitarians).
I think AIS would have a *much* better chance at actually making a difference if people thought of AIS researchers as heroes. And if AIS cared more about how others perceived them. But people do really terrible utility calculus here and think "well, I don't want to have to keep diversity in mind while I hire! my timelines are too short! of course I'd *like* to be able to keep diversity in mind, but we simply can't afford to". I think this is horrible. To be clear, I don't think diversity is a means to an end. I think it's just a necessity, and I think the utter lack of diversity in EA is disgusting and saddening and a blatant reflection on poor EA leadership and poor utility calculus mainly from the rationalist community (more later).
Ex 2: the mean boss who's hell-bent on getting stuff done vs. the really nice boss who's also competent. I'd overwhelmingly bet that the really nice boss actually leads to more productivity, especially over the long run. And a really nice boss who's also incompetent isn't admired either, at least if the incompetence leads to team dysfunction (and not just an easier job).
_________________________________________
Ok that's done. I forgot to say thanks for leaving this massive comment!! Thank you for taking the time to do that, I really appreciate you engaging with me.