The article made me think more deeply about what rubs me the wrong way about the whole movement.
I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles, and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". They're the type of people who would be embarrassed not to have an opinion on a topic or to say "I don't know".
In the pre-AI days this was sort of tolerable, but since then, the frothing-at-the-mouth conviction that the end of the world is coming just shows a real lack of humility and a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
jandrese 18 hours ago [-]
They remind me of the "Effective Altruism" crowd, who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic, not realizing that their initial conditions were highly artificial, so any conclusion they reach is only of academic value.
There is a term for this: "getting stuck up your own butt." It wouldn't be so bad except that said people often take on an air of absolute superiority because they used "only logic", and in their heads they cannot be wrong. Many people end up thinking like this as teenagers or 20-somethings, but most will have someone in their life who smacks them over the head and tells them to stop being so foolish. If you have enough money and the Internet, though, you can insulate yourself from that kind of oversight.
troyastorino 17 hours ago [-]
The overlap between the Effective Altruism community and the Rationalist community is extremely high. They’re largely the same people. Effective Altruism gained a lot of early attention on LessWrong, and the pessimistic focus on AI existential risk largely stems from an EA desire to avoid “temporal-discounting” bias. The reasoning is something like: if you accept that future people count just as much as current people, and that the number of future people vastly outweighs everyone alive today (or who has ever lived), then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
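To make that arithmetic concrete, here is a toy version of the expected-value calculation (a sketch; every number is invented purely for illustration):

    # Toy sketch of the longtermist expected-value argument.
    # All numbers are invented for illustration only.
    p_extinction = 1e-6      # assumed probability of an extinction-level event
    future_people = 1e16     # assumed number of potential future people
    lives_saved_now = 1e6    # lives a large present-day intervention might save

    ev_extinction_risk = p_extinction * future_people  # 1e10 expected lives
    # Even a one-in-a-million risk "dominates" any present-day charity:
    print(ev_extinction_risk > lives_saved_now)  # True

Note that the conclusion is carried entirely by the two assumed inputs, which is exactly the premise-sensitivity the next paragraph discusses.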
People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
benchly 41 seconds ago [-]
This tracks based on my limited contact with LessWrong during the whole Roko's Basilisk thing.
I quickly lost interest in Roko's Basilisk, but that is what brought me in the door and started me looking around the discussions. At first, it was quite seductive. There was a strange fearlessness there, a willingness to say and admit some things about humanity, our limitations, and how we tend to think that other great thinkers maybe danced around in the past. After a while it became clear that while there were a select few individuals who had found some balance between purely rational thinking and how reality actually works, most of the rest had their heads so far up their asses that they'd fart and call it a cool breeze. Reminded me of my brief obsession with Game Theory and realizing that even its creators knew its utility was not quite as advertised to the layman (as in, it would not really help you predict or plan for anything at all, just model how decisions might be made).
dasil003 16 hours ago [-]
Intelligence and rational thought are useful, but like any strategy they have their tradeoffs and limitations. No amount of intelligence can overcome the chaos of long time horizons, especially when we're talking about human civilization. IMHO it's reasonable to pick a long-term problem/risk and focus on solving it. But it's pure hubris to think rationality will give you anything approaching high confidence about what the biggest problems and risks actually are on a 20-50 year time horizon, let alone 200-500 years or longer.
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the situation that we live in is not stable at all; "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (e.g. peak oil, global warming), we really have no idea what will be the triggers and inflection points that change the social fabric in ways that are unforeseeable and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean, 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
lisper 13 hours ago [-]
I think one can levy a much more specific critique of rationalism: rationalism is in some sense self-defeating. If you are rational you will necessarily conclude that the fundamental dynamic that drives (the interesting parts of) the universe is Darwinian evolution, which is not rational. It blindly selects for reproductive fitness at the expense of all else. If you are a gene, you can probably produce more offspring in an already-industrialized environment by making brains that lean more towards misogyny and sexual promiscuity than towards gender equality and intellectual achievement.
The real conflict here is between Darwinism and enlightenment ideals. But I have yet to see any self-styled Rationalists take this seriously.
GoblinSlayer 53 minutes ago [-]
Darwinism isn't a weakness of rationality. Teleology has a fine-tuning problem, while Darwinism is minimally fine-tuned to work from scratch, which can be said to be optimal. Also, Darwinism doesn't select for reproductive fitness as such; that's only a proxy goal. The true goal is survival, so you can produce more offspring only in a way compatible with that true goal.
mettamage 13 hours ago [-]
I always liken this to the idea that we're all asteroids floating in space. There's no free will and everything is determined. We just see the whole thing unfold from one conscious perspective.
Emotionally I don’t subscribe to this view. Rationally I do.
My critique of rational people is that they don't seem to fully take experience into account. It's assumptions + rationality + experience/data + whatever strong inclinations one has that seem to make up the full picture for me.
Retric 12 hours ago [-]
> no free will
That always seemed like a meaningless argument to me. To an outside observer, free will is indistinguishable from a random process over some range of possibilities. You aren't going to randomly go to sleep with your hand in a fire; there's some hard-coded biology preventing that choice, but that only means human behavior isn't completely random, which is hardly a groundbreaking discovery.
At the other end, we have no issue making an arbitrary decision where there's no way to predict what the better choice is. So what exactly does free will bring to the table that we're missing without it? Some sort of mystical soul? Well, what if that's also deterministic? Unpredictability is useful in game theory, but computers can get that from a hardware RNG based on quantum processes like radioactive decay, so it doesn't mean much.
Finally, subjectively the answer isn’t clear so what difference does it make?
vidarh 3 hours ago [-]
People get emotional about free will because if you come to believe there is no free will it makes you question a lot of things that are emotionally difficult.
E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
Similarly, wealth disparities can't be excused by someone choosing to work harder, because they had no agency in the "decision".
You can still justify some degree of punishment and reward, but a lack of free will changes which justifications are reasonable very substantially.
E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders carried out in the heat of passion rather than as part of e.g. gang crime, the odds of someone carrying out another are extremely low, and the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.
And so holding on to a belief in free will is easier for a lot of people than the alternative.
My experience is that there are few issues that make people angrier than suggesting we don't have free will, once they start thinking through the consequences (and some imagined ones...).
aleph_minus_one 54 minutes ago [-]
I don't find the consequences very hard to bear:
For example
> E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
and
> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals
are simply logical to me (even without assuming any lack of free will).
So what is emotionally difficult about this, as you claim?
WA 48 minutes ago [-]
If there is no free will, thoughts about free will are predetermined and so is punishment. The punishers don’t have agency either. You seem to say that punishers do have free will, but criminals don’t?
mettamage 11 hours ago [-]
> That always seemed like a meaningless argument to me.
Same, as that is not the lived experience. I notice that I care about free choice.
The idea that there's no free will may be a pessimistic outlook to some, but to me it's a strictly neutral one. It used to be a bit negative, until I looked more closely and saw that there's a difference between looking at a situation objectively and having a lived experience. When it comes to my inclinations and how I want to live life, lived experience takes precedence.
I don't have my thoughts fully sharp on it, but I don't think the concept even exists philosophically; I think that's also what you're getting at. It's a conceptual remnant from the past.
adastra22 4 hours ago [-]
"Free choice" is the first step towards the solution to this paradox: free will is what a deterministic choice feels like from the inside. The popular notion of free will is that our decisions are undetermined, which must imply that there is a random element to them.
But though that is the colloquial meaning, it doesn't line up with what people say they want: you want to make your choice according to your own reasons. You want free choice. But unless your own reasoning includes a literal throw of the dice, your justifications deterministically decide the outcome.
"Free will" is the ability to make your own choices, and for most people most of the time, those choices are deterministic given the options and knowledge available. Free will and determinism are not only compatible, but necessarily so. If your choices weren't deterministic, it wouldn't be free will.
vidarh 2 hours ago [-]
This is the position that is literally called compatibilist.
But when you probe people, while a lot of them will argue in ways that a philosopher might call compatibilist, my experience is that people will also strongly resist the notion that the only options are randomness and determinism. A lot of people have what boils down to a religious belief in a third category that is not merely a combination of those two, but infuses some mysterious third option where they "choose" in a way they can't explain.
Most of the time, people who believe there is no free will (and can't be), like me, take positions similar to what you described, positions that - again - a proponent of free will might describe as compatibilist, but sometimes we oppose the term for the reason above: a lot of people genuinely believe in a "third option" for how choices are made.
And so there are really two separate debates on free will: Does the "third option" exist or not, and does "compatibilist free will" exist or not. I don't think I've ever met anyone who seriously disagrees that "free will" the way compatibilists define it exists, so when compatibilists get into arguments over this, it's almost always a misunderstanding...
But I have met plenty of people who disagree with the notion that things are deterministic "from the outside".
adastra22 2 hours ago [-]
It is stronger than compatibilism. Compatibilism argues that free will and determinism are orthogonal. The argument I summarized is that free will must necessarily imply determinism.
vidarh 1 hour ago [-]
I think that is a distinction without a difference, inasmuch as it's an excuse not to deal with it. But compatibilist "free will" must imply determinism unless some "magic" third alternative exists, because there isn't another option, and there is no evidence to suggest such a third alternative exists; so in practice, every compatibilist I've had this discussion with has fallen back on arguing that free will is compatible with determinism.
GoblinSlayer 59 minutes ago [-]
A definition doesn't spell out all of its implications. In practice, compatibilists do the deeper analysis that reveals that determinism is required for free will.
GoblinSlayer 1 hour ago [-]
Opposing the term gives a wrong result too, as people jump to hard determinism.
sunshowers 5 hours ago [-]
I have much less patience for C++ than I would in a world with free will.
Since there's no free will, outcomes are determined by luck, and what matters is how lucky we can make people through pit-of-success environments. Rust makes people luckier than C++ does.
I also have much less patience for blame than I do in a world with free will. I believe, for example, that blameless postmortems lead to much better outcomes than trying to pretend people had free will to make mistakes, and therefore blaming them for those mistakes.
You can get to these positions through means other than rejection of free will, but the most robust grounds for them are fundamentally deterministic.
tsimionescu 4 hours ago [-]
If there is no free will, then all arguments about what should be done are irrelevant, since every outcome is either predetermined or random, so you have no influence on whether the project at work will choose Rust or C++. This choice was either made 13 billion years ago at the Big Bang, or it is an entirely random process.
Retric 4 hours ago [-]
Lack of free will doesn’t prevent logical arguments from seeming to work.
GoblinSlayer 3 hours ago [-]
Depends on whether you consider facts or theory. Facts don't prevent logical arguments from seeming to work, but the lack of free will is a theory. When theory doesn't match facts, the theory is wrong.
sunshowers 4 hours ago [-]
This is a strawman argument advanced by those who rely on supernatural explanations. In reality, people's utterances and actions are part of the environment that determines future actions, just like everything else.
tsimionescu 4 minutes ago [-]
Sure, but that still doesn't matter: the fact that I wrote my previous comment is what caused you to write your response, but it's not like I had a choice to write that comment or some other: the fact that I wrote that comment, as well as everything that led to me writing it (conversations with teachers, my parents letting me watch English cartoons so I learned English, etc), were predetermined the moment the Big Bang happened, or they're just a quantum fluctuation.
What I'm saying is that there's no logical point to the concept "should" unless you have some concept of free will: everything that happens must happen, or is entirely random.
paleotrope 7 hours ago [-]
Divorced from a religious context, it doesn't make any difference.
Retric 7 hours ago [-]
Which religious context, and why?
lisper 12 hours ago [-]
If you get down to the quantum level there is no such thing as objective reality. Our perception that the world is made of classical objects that actually exist at particular places at particular times and have continuity of identity is an illusion. But it's a really compelling illusion, and you won't go far wrong treating it as if it were the truth in 99% of real-world situations. Likewise, free will is an illusion, nothing more than a reflection of our ignorance of how our brains work. But it is a really compelling illusion, and you won't go far wrong treating it as if it were the truth, at least some of the time.
mettamage 11 hours ago [-]
> If you get down to the quantum level there is no such thing as objective reality.
What do you mean by that? It still exists doesn't it? Albeit in a probabilistic sense that becomes non-probabilistic at larger scales.
I don't know much about quantum other than the high level conceptual stuff.
> Under QIT, a measurement is just the propagation of a mutually entangled state to a large number of particles.
Eyeroll: so it's MWI in disguise, but MWI is quantum realism. The illusion they talk about is that the observed macroscopic state is part of a bigger superposition (incomplete observation). But that's dumb; even if it's part of a bigger state, it's still real, because it's not made up, but observed.
stuartjohnson12 13 hours ago [-]
To the contrary, here's a series of essays on the subject of evolutionary game theory, the incentives created by competition, and its consequences for human wellbeing:
"Moloch hasn't won" is a lengthy critique of the argument you are making here.
lisper 12 hours ago [-]
That doesn't seem to be on point to me. I'm not talking about being "caught in bad equilibria". My assertion is that rationalism itself is not stable, that the (apparent) triumph of rationalism since the Enlightenment was a transient, not an equilibrium. And one of the reasons it was a transient is that self-styled rationalists believed (and apparently still believe) that rationalism will inevitably triumph because it is rational, because it is in more intimate contact with reality than religion and superstition. But this is wrong because it overlooks the fact that what triumphs in the long run is simply reproductive fitness. Being in contact with reality can be actively harmful to reproductive fitness if it leads you to, say, decide not to have kids because you are pessimistic about the future.
fc417fc802 8 hours ago [-]
> it overlooks the fact that what triumphs in the long run is simply reproductive fitness.
Why can't that observation be taken into account? Isn't the entire point of the approach accounting for all inputs to the extent possible?
I think you are making invalid assumptions about the motivations or goals or internal state or etc of the actors which you are then conflating with the approach itself. That there are certain conditions under which the approach is not an optimal strategy does not imply that it is never competitive under any.
The observation is then that rationalism requires certain prerequisites before it can reliably out compete other approaches. That seems reasonable enough when you consider that a fruit fly is unlikely to be able to successfully employ higher level reasoning as a survival strategy.
lisper 4 hours ago [-]
> Why can't that observation be taken into account?
Of course it can be. I'm saying that AFAICT it generally isn't.
> rationalism requires certain prerequisites before it can reliably out compete other approaches
Yes. And one of those, IMHO, is explicit recognition that rationalism does not triumph simply because it is rational, and coming up with strategies to compensate. But the rationalist community seems too hung up on things like malicious AI and Roko's basilisk to put much effort into that.
stuartjohnson12 53 minutes ago [-]
This argument proves too much. If rationalism can't "triumph" (presumably over other modes of thought) because evolution makes moral realism unobservable, then no epistemic framework will help you - does empirically observing the brutality of evolution lead to better results? Or perhaps we should hypothesise that it's brutal and then test that prediction against what we observe?
I'm sympathetic to the idea that we know nothing because of the reproductive impulse to avoid doing or thinking about things that led our ancestors to avoid procreation, but such a conclusion can't be total, because otherwise it is self-defeating: it is contingent on rationalist assumptions about the mind's capacity to model knowledge.
fc417fc802 30 minutes ago [-]
The point being made is that rationalism is a framework. Having a framework does not imply competent execution. At lower levels of competence other strategies win out. At higher levels of competence we expect rationalism to win out.
Even then that might not always be the case. Sometimes there are severe time or bandwidth or energy or other constraints that preclude carefully collecting data and thinking things through. In those cases a heuristic that is very obviously not derived from any sort of critical thought process might well be the winning strategy.
There will also be cases where the answer provided by the rational approach will be to conform to some other framework. For example where cult type ingroup dynamics are involved across a large portion of the population.
smallnamespace 9 hours ago [-]
> Being in contact with reality can be actively harmful to reproductive fitness if it leads you to, say, decide not to have kids because you are pessimistic about the future.
The fact that you can write this sentence, consider it to be true, and yet still hold in your head the idea that the future might be bad but it's still important to have children suggests that "contact with reality" is not a curse.
imtringued 2 hours ago [-]
You got Darwinism exactly backwards. Darwinism and nature do not select like an algorithm. There is no cost function in reality and no population selection and reproduction algorithm. What you're seeing is the illusion of selection due to selection bias.
If gender equality and intellectual achievement don't produce children, then that isn't "Darwinism selecting rationality out". You can't expect the continued existence of finite-lifespan organisms if there are no replacement organisms. Raising children is hard work. The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves, particularly when their belief in gender equality entails not wanting male offspring. The alternative is essentially freeloading: expecting others, who do not share the beliefs, to produce children for you and also to teach them the "enlightened" belief of forcing "enlightened" beliefs onto others (note the circularity; the initial conditions are usually irrelevant and often just a fig leaf to perpetuate the status quo).
munksbeer 12 hours ago [-]
I hesitate to nitpick, but Darwinism (as far as I know) is not really the term to use because Darwin's theory was limited to life on earth. Only later was the concept generalised into "natural selection" or "survival of the fittest".
I'm not sure I entirely understand what you're arguing here, but I absolutely do agree that the most powerful force in the universe is natural selection.
adastra22 4 hours ago [-]
The modern understanding of Darwin's theory (even the original theory, not necessarily neo-Darwinian extensions of it) applies to the origins of life and to non-biological systems as well. Darwin himself was largely concerned with biology and restricted his published writings to that topic, but even he saw the application to the origin of life, and the implications for religion. Even if he hadn't, we generally still attach the discoverer's name to a theory even when it is applied to a domain outside their original area of concern.
The term "survival of the fittest" predates Darwin's Origin of Species, and was adopted by Darwin within his lifetime, btw.
lisper 12 hours ago [-]
The term "Darwinian evolution" applies to any process that comprises iterated replication with random mutation followed selection for some quality metric. Darwin himself would not have defined it that way, but he still deserves the credit for being the first to recognize and document the power of this simple process.
throwaway2037 4 hours ago [-]
I used to think that massive, very long-term droughts might cause serious instability, but I have since changed my mind. In highly developed nations, the amount of irrigation infrastructure built in the last 100 years is simply stunning. Plus, national agriculture research programmes are always researching how to use less water and grow the same amount of product. As for drinking water: rich places just build desalination plants. Sure, it is much more expensive than natural water sources (rivers, lakes, aquifers), but not expensive enough to cause political instability or serious economic harm. To be clear: everything I wrote is from the perspective of highly developed nations. In middle-income nations and below, droughts are incredibly challenging to overcome. The political and economic impacts can be enormous.
> Many farmers in the Texas High Plains, which rely particularly on groundwater, are now turning away from irrigated agriculture as pumping costs have risen and as they have become aware of the hazards of overpumping.
> Sixty years of intensive farming using huge center-pivot irrigators has emptied parts of the High Plains Aquifer.
> as the water consumption efficiency of the center-pivot irrigator improved over the years, farmers chose to plant more intensively, irrigate more land, and grow thirstier crops rather than reduce water consumption--an example of the Jevons Paradox in practice
How will the Great Plains farmers get water once the remaining groundwater is too expensive to extract?
Salt Lake City cannot simply build desalination plants to fix its water problem.
I expect the bad experiences of Okies during the internal migration of the Dust Bowl will be replicated once the temporary (albeit century-long) relief of using fossil water is exhausted.
AnthonyMouse 16 hours ago [-]
> Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE (extinction-level event) than prevent one.
TeMPOraL 14 hours ago [-]
> Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
The rationalist community understands that very well. They even know how to put bounds on the unknowns and their own lack of information.
> The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
Right. Good thing they'd agree with you 100% on this.
astrange 14 hours ago [-]
> They even know how to put bounds on the unknowns and their own lack of information.
No they don't. They think they can do this because they've accidentally reinvented the philosophy "logical positivism", which philosophers gave up on because it doesn't work. (This is similar to how they accidentally reinvented reconstructing arguments and called it "steelmanning".)
The nature of unknowns is that you don't know them.
What's the probability of AI singularity? It has never happened before so you have no priors and any number you assign will be pure speculation.
TeMPOraL 14 hours ago [-]
Same is true about anything you're trying to forecast, by definition of it being in the future. And yet people have figured out how to make predictions more narrow than shrugging.
jandrese 13 hours ago [-]
"It is difficult to make predictions, especially about the future."
Most of the time we make predictions based on how similar events happened in the past. For completely novel situations it's close to impossible to make a prediction and reckless to base policy on such a prediction.
ffsm8 13 hours ago [-]
That's strictly true, but I feel like you're misunderstanding something. Most people aren't actually doing anything truly novel, hence very few people ever actually have to even attempt to predict things in this way.
But it was necessary at the beginning of flight, and the flight to the Moon would never have been possible without a few talented people being able to make predictions about scenarios they knew little about.
There are just way too many people around nowadays, which is why most of us never get confronted with such novel topics, and consequently we don't know how to reason about them.
senko 14 hours ago [-]
>> It has never happened before
> Same is true about anything you're trying to forecast, by definition of it being in the future
There might be some flaws in this line of reasoning...
freejazz 14 hours ago [-]
"And the general absolutist tone of the community. The people involved all seem very... Full of themselves ?"
>And yet people have figured out how to make predictions more narrow than shrugging
And?
bayarearefugee 12 hours ago [-]
That's only one flaw in the theory.
There are others, such as the unproven, narcissistic and frankly unlikely-to-be-true assumption that humanity continuing to exist is a net positive in the long run.
adastra22 4 hours ago [-]
"net positive" requires a human being existing to judge.
christophilus 10 hours ago [-]
A net positive for whom?
nradov 14 hours ago [-]
In what sense are people in those communities "quite smart"? Stupid is as stupid does. There are plenty of people who get good grades and score highly on standardized tests, but are in fact nothing but pontificating blowhards and useless wankers.
astrange 14 hours ago [-]
They're members of a religion which says that if you do math in your head the right way you'll be correct about everything, and so they think they're correct about everything.
They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.
Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser-beam fight against them / enslave them / take over the world.
jeffhwang 13 hours ago [-]
If only I could +1 this more than once! I have occasionally learned valuable things from people in the rationalist community, but this overall lack of humility (and strangely blinkered view of the humanities and of important topics like the history of science relevant to "STEM") ultimately turned me off to the movement as a whole. And I love science and math! It just shouldn't belong to people with this (imo) childish model of people, IQ, etc.
imtringued 2 hours ago [-]
According to rationalists, humans don't work together, so you can't add up their individual intelligence to get more intelligence. Meanwhile building a single giant super AI is technologically feasible, so they weigh the intelligence of a single person vs all AIs operating as a collective hivemind.
georgeecollins 10 hours ago [-]
For anyone who takes Effective Altruism or Rationalism seriously, I strongly recommend reading "You Are Not a Gadget". It was written more than ten years ago, was prescient about the issues of social media, and also contains one of the most devastating critiques of EA: the idea of a circle of empathy.
You don't have to agree with any of this. I am not defending every idea the author has. But I recommend that book.
the-mitr 9 hours ago [-]
Thanks for the reco.
semi-extrinsic 4 hours ago [-]
> then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
This is the logic of someone who has failed to comprehend the core ideas of Calculus 101. You cannot use intuitive reasoning when it comes to infinite sums of numbers with extremely large uncertainties. All that results is making a fool out of yourself.
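To illustrate (a sketch with invented numbers): rerun the same expected-value formula across the plausible ranges of its inputs, and the answer swings by many orders of magnitude, so it can be made to justify almost anything:

    # Same EV formula, varying the assumed inputs over plausible ranges.
    # Every value is invented; the point is the spread, not the numbers.
    for p in (1e-12, 1e-9, 1e-6, 1e-3):   # assumed probability of catastrophe
        for n in (1e10, 1e13, 1e16):      # assumed number of future people
            print(f"p={p:g}, n={n:g} -> EV = {p * n:g} expected lives")
    # Results span 1e-2 to 1e13: fifteen orders of magnitude, determined
    # entirely by unverifiable assumptions.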
ineptech 10 hours ago [-]
What's the seductive-but-wrong part of EA? As far as I can tell, the vast majority of the opposition to it boils down to "Maybe you shouldn't donate money to pet shelters when people are dying of preventable diseases" vs "But donating money to pet shelters feels better!"
adastra22 4 hours ago [-]
That hasn't defined EA for at least 10 years or so. EA started with "you should donate to NGOs distributing mosquito nets instead of the local pet shelter".
It then moved into "you should work a soulless investment banking job so you can give more".
More recently it was "you should excise all expensive fun things from your life, and give 100% of your disposable income to a weird poly sex cult and/or their fraudulent paper hedge fund because they're smarter than you."
DowsingSpoon 7 hours ago [-]
It might be the parts that lead a person to commit large scale fraud with the idea that the good they can do with the stolen money outweighs all the negatives. Or, at least, that’s the popular idea of what happened to Sam Bankman-Fried. I have no idea what was actually going through that man’s mind.
In any case, EA smells strongly of “the ends justify the means” which most popular moral philosophies reject with strong arguments. One which resonates with me is that there are no “ends.” The path itself is the goal.
com2kid 4 hours ago [-]
> "the ends justify the means" which most popular moral philosophies reject with strong arguments
This is a false statement. Our entire modern world is built on the basis of "the ends justify the means": every time money is spent on long-term infrastructure instead of giving poor kids food right now, every time a war is fought, every time a doctor triages injuries at a disaster.
imtringued 2 hours ago [-]
I've noticed this with a lot of radical streamers on both sides. They don't care about principles, they care about "winning" by any means necessary.
Winning at things that align with your principles is itself a principle. If you don't care about principles, you don't care about what you're winning at, thereby making every victory hollow and meaningless. That is how you turn into a loser at everything you do.
tsimionescu 4 hours ago [-]
It's not all bad, some conclusions are ok. But it also has ideas like "don't donate to charity, it's better to invest that money in, like, an oil fund, and grow it 1000x, and then you can donate so much more! Bill Gates has done much more for humanity than some Red Cross doctor!". Which is basically just a way to make yourself feel good about becoming richer, much like "prosperity gospel" would for the religious.
comp_throw7 3 hours ago [-]
This is not a commonly-held position in EA.
Filligree 9 hours ago [-]
Yes, but they’re smug about it, which means they’re wrong.
Of course it sounds ridiculous when you spell it out this way.
skinnymuch 5 hours ago [-]
Or something like: focusing on a high-paying job and donating the money does absolutely nothing to solve any root or structural problems, which defeats the supposed purpose of caring about future people or being an effective altruist in the first place.
Of course, the way your comment is written makes the criticism sound silly.
jdmichal 17 hours ago [-]
I'm not familiar with any of these communities. Is there also a general bias towards one side between "the most important thing gets the *most* resources" and "the most important thing gets *all* the resources"? Or, in other words, the most important thing is the only important thing?
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
nearbuy 16 hours ago [-]
It's not the case. Effective altruists give to dozens of different causes, such as malaria prevention, environmentalism, animal welfare, and (perhaps most controversially) extinction risk. It can't tell you which root values to care about. It just asks you to consider whether the charity is impactful.
Even if an individual person chooses to direct all their donations to a single cause, there's no way to get everyone to donate to a single cause (nor is EA attempting to). Money gets spread around because people have different values.
It absolutely does take some money away from other causes, but only in the sense that all charities do: if you give a lot to one charity, you may have less money to give to others.
JoshuaDavid 16 hours ago [-]
The general idea is that on the margin (in the economics sense), more resources should go to the most effective and neglected thing, and the amount of resources I control is approximately zero in a global sense, so I personally should direct all of my personal giving to the highest-impact thing.
skinnymuch 5 hours ago [-]
And in their logic, the highest impact is to donate money, take high-paying jobs regardless of morality, and not focus on any structural or root issues.
dmurray 13 hours ago [-]
The other weird direction it leads is space travel.
If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.
munksbeer 12 hours ago [-]
I don't see how bringing new souls (whatever those are) into existence should naturally qualify as a good thing?
Perhaps you're arguing as an illustration of the way this group of people think, in which case I understand your point.
dmurray 11 hours ago [-]
You can make arguments for it starting with "killing billions of people would definitely be bad" or "some fraction of those people will likely share my genes, which I have a biological drive to pass on".
It encodes a slight bias towards human existence being a positive thing for us humans, but I don't think it's the shakiest part of that reasoning.
adastra22 4 hours ago [-]
Nitpick: it bottoms out at quadratic growth in the limit, not exponential.
bena 13 hours ago [-]
Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.
I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.
If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.
aspenmayer 12 hours ago [-]
> Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.
The current sums invested and donated to altruist causes are rounding errors themselves compared to the GDPs of countries, so the revealed preference of those investing and donating to altruist causes is to care about the future and the present also.
Are you saying that they should give a greater preference to help those who already exist rather than those who may exist in the future?
I see a lot of Peter Singer’s ideas in modern “effective” altruism, but I get the sense from your comment that you don’t think that they have good reasons for doing what they do, or that their reason leads them to support well-meaning but ineffective solutions. I am trying to understand your position without misrepresenting your point or goals. Are you naysaying or do you have an alternative?
I think it's essentially a grift. An excuse to do nothing while looking like you care and reaping rewards.
If they wanted to help, they should be focused on the now. Global poverty, climate change, despotic world leaders. They should be aligning themselves against such things.
But instead what we see is essentially not that. Effective altruism is a lot like the Democratic People's Republic of Korea, a bit of a misnomer.
imtringued 1 hour ago [-]
They have aligned themselves in favor of global poverty, climate change and despotic world leaders.
A lot of them argue that poor countries essentially don't matter, that climate change is not an extinction event, and that there should be an authoritarian world government to prevent nuclear conflict and minimize the risk of nuclear extinction.
>In his dissertation On the Overwhelming Importance of Shaping the Far Future (2013), supposedly “one of the best texts on existential risks,”[9] Nicholas Beckstead meditates on the “ripple effects” a human life might have for future generations and concludes “that saving a life in a rich country is substantially more important than saving a life in a poor country” due to the higher level of innovation and economic productivity attained in these countries.[10]
To be pedantic, the DPRK is run via the will of the people to a degree comparable to any country. A bigger misnomer is the West calling liberal democracy just "democracy".
im3w1l 15 hours ago [-]
I think most everyone can agree with this: being 100% rigorous and rational, reasoning from first principles and completely discarding received wisdom, is a great trait in a philosopher but a terrible trait in a policymaker, because for the former, exploring ideas for the benefit of future generations is more important than whether they ultimately reach the right conclusion or not.
wagwang 3 hours ago [-]
The big problem is that human values are similar to neural network weights: they can't be cleanly defined as true/false axioms like "human life is inherently valuable" and "murder is wrong". The easiest axiom to break is something like "every human life is equally valuable", or even "every human life is born equal".
Johanx64 13 hours ago [-]
> Being 100% rigorous and rational, reasoning from first principles
It really annoys me when people say that those religious cultists do that.
They derive their bullshit from faulty, poorly thought out premises.
If you fuck up the very first calculations of the algorithm, it doesn't matter how rigorous all the subsequent steps are. The results are going to be all wrong.
PaulHoule 9 hours ago [-]
EA always rubbed me the wrong way.
(1) The kind of Gatesian solutions they like to fund like mosquito nets are part of the problem, not part of the solution as I see it. If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want. Expecting NGOs to do everything for them is the same kind of neoliberal thinking that has rotted state capacity in the core and set us up for a political crisis.
(2) It is one thing to do something wrong, realize it was a mistake, and then make amends. It's another thing to plan to do something wrong and try to offset it somehow. Many of the high-paying jobs that EA wants young people to enter are "part of the problem" when it comes to declining state capacity, legitimation crisis, and not dealing with immediate problems -- like the fact that one of these days there's going to be a heat wave that is a mass casualty event.
Furthermore
(3) Time discounting is a central part of economic planning.
It is controversial as hell, but one of the many things the Soviet Union got wrong before the 1980s was planning with a discount rate of zero, which led to many economically and ecologically harmful projects. If you seriously think it should be zero, you should also be considering whether anybody should work in the finance industry at all, or whether we should have dropped a hydrogen bomb on Exxon's headquarters yesterday. At some point speculations about the future are just speculation. When it comes to the nuclear waste issue, for instance, I don't think we have any idea what state people are going to be in 20,000 years. They might be really pissed that we buried spent nuclear fuel some place they can't get at it. Even the plan to burn plutonium completely in fast breeder reactors has an air of unreality about it; even though it happens on a relatively short 1000-year timescale, we can't be sure at all that anyone will be around to finish the job.
(4) If you are looking for low-probability events to worry about I think you could find a lot of them. If it was really a movement of free thinkers they'd be concerned about 4,000 horsemen of the apocalypse, not the 4 or so that they are allowed to talk about -- but talk about a bunch of people who'll cancel you if you "think different". Somehow climate change and legitimation crisis just get... ignored.
(5) Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion; not least, "The Singularity" was talked about by the Jesuit priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.
gsf_emergency_2 9 hours ago [-]
(3) "Controversial" is a weasel word AFAIC :)
The difficulty is in deriving any useful utility function from prices (even via preferences :), and as you know, econs can't rid themselves of that particular intrusive thought
What makes it really bad is that you can't add different people's utility functions, or for that matter, multiply the utility function of some imagined galactic citizen by some astronomical amount. The question of "what distribution of wealth maximizes welfare" [1] is unanswerable in that framework and we're left with the Randian maxim that any transaction freely entered into is "fair" because nobody would enter into it if it didn't increase their utility function.
[1] Though you might come to the conclusion that greedier people should have the money because they like it more.
gsf_emergency_2 8 hours ago [-]
Comic sophism ain't gonna make hypernormies reconsider lol
(Aside from the semi-tragic one to consider additive dilogarithms..)
One actionable (utility-agnostic) suggestion: study the measurable consequences of (quantifiable) policy on carbon pricing, because this is already quite close to the uncontroversial bits.
bell-cot 8 hours ago [-]
> When it comes to the nuclear waste issue, for instance ...
Nuclear waste issues are 99.9% present-day political/ideological. Huge portions of the Earth are uninhabitable due to climate and/or geology. Lead, mercury, arsenic, and other naturally-occurring poisons contaminate large areas. Volcanoes spew CO2 and toxic gasses by the megaton.
Vs. when is the last time you heard someone get excited over toxic waste left behind by the Roman Empire?
Johanx64 13 hours ago [-]
>People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
They don't even do this.
If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience way more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is in effect the best thing that could happen.
Which is why accelerationism ignoring all the AGI risks is the correct strategy, presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).
Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.
You can't deprive non-existent people of anything.
And if you do, I hope you're ready for purely logical, deductive follow up - every droplet of sperm is sacred and should be used to impregnate.
cassepipe 16 hours ago [-]
I read the whole tree of responses under this comment and I could only convince myself that when people have no arguments they try to make you look bad.
Most of the criticisms are just "But they think they are better than us!" and the rest is "But sometimes they are wrong!"
I don't know about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people having their egos hurt, but in the end it reminds me why I like rationalists and don't like most people.
gopher_space 10 hours ago [-]
Ever had a conversation with someone who literally cannot reexamine their base principles? Like, it's functionally impossible for them? That's everyone's central criticism of rationalists.
Being able to do that is pretty much "entry level cognition" for a lot of us. You should be doing that yourself and doing it all the time if you want to play with the big kids.
One of the things I really miss about the old nerds-only programmer's pit setup was the amount of room we had for instruction, especially regarding social issues. The scenes from the college department in Wargames were really on the nose, but highlight a form of education that was unavoidable if you couldn't just dip out of a conversation.
comp_throw7 3 hours ago [-]
What a deeply confused criticism of rationalists, who are possibly the only meaningful social group in existence to celebrate changing their minds in response to new evidence.
smus 16 hours ago [-]
Feels like "they are wrong and smug" is enough reason to dislike the movement
TeMPOraL 14 hours ago [-]
Bashir: Even when they're neither?
Garak: Especially when they're neither.
smus 14 hours ago [-]
The comment I replied to conceded wrongness and smugness but is still somehow confused about why otherwise intelligent people dislike the movement. I was hoping to clear it up for them.
Extra points for that comment's author implying that people who don't like the wrong and smug movement are unintelligent and protecting their egos, thus personally proving its smugness.
cassepipe 12 hours ago [-]
I only conceded it insofar as everyone can be wrong sometimes, but at least those people seem to have a protocol to deal with it. Most people don't, and are fine with that. I stand on the side of those who care and are trying.
As for smugness, it is subjective. Are those people smug? Or are they talking passionately about some issue, with the confidence of someone who feels what they are talking about and expects it to resonate? It's in the eye of the beholder, I guess.
For example, what you call my smugness is what I would call a slightly depressed attitude, fueled by the fact that it's sometimes hard to relate to other people's feelings and behavior.
ajkjk 16 hours ago [-]
Here's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically, paperclip-maximizing robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world through a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that duty is the foundation of all morality and is utterly essential to preserve: it is what keeps everything else you do from going awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with real value that seems to be unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to keep the value of the rationalist movement while taking seriously the red flags it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being drawn to highly-scaling actions, e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring in the first place.)
johhnylately535 15 hours ago [-]
I finally gave in and created an account because of your comment. It's beautifully put. I would only perhaps add that, to me, the neo-rationalist thing looks the most similar to things that don't work yet attract hardcore "true believers". It's a pattern repeated through the ages, perhaps most intimately for me in the seemingly endless parades of computer system redesigns: software, hardware, or both. Randomly one might pick "the new and exciting Digg", the Itanium, and the Metaverse as fairly modern examples.
There is something about a particular "narrowband" signaling approach, where a certain kind of purity is sought, with an expectation that, given enough explaining, you will finally get it, become enlightened, and convert to the ranks. A more "wideband" approach would at least admit observations like yours do exist and must be comprehensively addressed to the satisfaction of those who hold such beliefs vs to the satisfaction of those merely "stooping" to address them (again in the hopes they'll just see the light so everyone can get back to narrowband-ville).
ajkjk 15 hours ago [-]
(Thank you) I agree, although I do think that the rationalists and EAs are way better than most of the other narrowband groups, as you call them, out there, such as the Metaverse or Crypto people. The rationalists are at least mostly legitimately motivated by morality and not just by a "blow it all up and replace it with something we control" philosophy (which I have come to believe is the belief-set that only a person who is convinced they are truly powerless arrives at). I see the rationalists as failing due to a skill issue as well: because they have so defined themselves by their rationalism, they have trouble understanding the things in the world that they don't have a good rational understanding of, such as morality. They are too invested in words and truth and correctness to understand that there can be a lot of emotional truth encoded in logical falsehood.
edit: oh, also, I think that a good part of people's aversion to the rationalists is just a reaction to the narrowband quality itself, not to the content. People are well-aware of the sorts of things that narrowband self-justifying philosophies lead to, from countless examples, whether it's at the personal level (an unaccountable schoolteacher) or societal (a genocidal movement). We don't trust a group unless they specifically demonstrate non-narrowbandedness, which means being collectively willing to change their behavior in ways that don't make sense to them. Any movement that co-opts the idea of what is morally justifiable---who says that e.g. rationality is what produces truth and things that run counter to it do not---is inherently frightening.
skinnymuch 5 hours ago [-]
They aren’t motivated by morality. They just are more moral relative to the niches you referred to.
Any group that focuses on its own goal of high-paying jobs, regardless of the morality of those jobs or how they contribute to the structural issues of society, is not that good. Then donating money while otherwise being okay with the status quo---not touching anything systemic in such an unjust world while supposedly focusing on morality---is laughable.
Terr_ 10 hours ago [-]
> Humans are generally better at perceiving threats than they are at putting those threats into words.
Not only that, but this is exactly the kind of scenario where we should be giving those signals the most weight: The individual estimating whether to join up with a tribe. (As opposed to, say, bad feelings about doing calculus.)
Not only does it involve humans-predicting-humans (where we have a rather privileged set of tools) but there have been millions of years of selective pressure to be decent at it.
twoodfin 15 hours ago [-]
What creeps me out is that I have no idea of their theory of power: How will they achieve their aims?
Maybe they want to do it in a way I’d consider just: By exercising their rights as individuals in their personal domains and effectively airing their arguments in the public sphere to win elections.
But my intuition is they think democracy and personal rights of the non-elect are part of the problem to rationalize around and over.
Would genuinely love to read some Rationalist discourse on this question.
janeerie 13 hours ago [-]
One mistake you're making is thinking that rationalists care more about people far away than people in their community. The reality is that they set the value of life the same for all.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
ajkjk 13 hours ago [-]
That's not a mistake I'm making. Assuming you're talking about bog-standard effective altruists---by (claiming to) value the suffering of people far away as the same as those nearby, they're discounting the people around them heavily compared to other people. Compare to anyone else who values their friends and family and community far more than those far away. Perhaps they're not discounting them to less-than-parity---just less than they are for most people.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
com2kid 3 hours ago [-]
> they're discounting the people around them heavily compared to other people
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
All lives are equal isn't a math problem. "Fuck it blow up the foreigners to keep oil prices low" is a math problem, it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the scientific advances that have come out of SK have repaid whatever tax dollars my grandparents spent many times over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because, and this is what people who voted against those programs directly gave as their reason, "white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
biomcgary 11 hours ago [-]
To me, the non-local focus of EA/rationalism is, at least partially, a consequence of their historically unusual epistemology.
In college, I became a scale-dependent realist, which is to say that I'm most confident of theories / knowledge at the 1-meter, 1-day, 1 m/s scales, and increasingly skeptical of our understanding of things that are bigger/smaller, have longer/shorter timeframes, or move at faster velocities. Maybe there is a technical name for my position? But it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than daily experience.
Terr_ 8 hours ago [-]
> no special responsibility to the people around you
Musing on the different failure-directions: pretty much any terrible present thing done to people can be rationalized by arguing that one gadzillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: make offerings to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more familiar; we may have more cultural experience and moral tools for detecting and preventing it.
A check on either process would be that the denigrated real/nearby humans revolt. :p
cycomanic 14 hours ago [-]
> Ironically, paperclip-maximizing robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world through a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety.
I had not read any rationalist writing in a long time (and I didn't know about Scott's proximity), but the whole time I was reading the article I was thinking the same thing you just wrote... "why are they afraid of AI, i.e. the ultimate rationalist, taking over the world?" Maybe something deep inside of them has the same reaction to their own theories as you so eloquently put above.
jay_kyburz 13 hours ago [-]
I don't read these rationalist essays either, but you don't need to be a deep thinker to understand why any rational person would be afraid of AI and the singularity.
The AI will do what it's programmed to do, but its programmers' morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
wat10000 12 hours ago [-]
Those corporations are already superhuman entities with morals that don’t match ours. They do cause a lot of problems. Maybe it’s better to figure out how to fix that real, current problem rather than hypothetical future ones.
Twey 9 hours ago [-]
This parallel has been drawn. Charlie Stross [0] in particular thinks the main difference is that pre-digital AIs are only capable of acting much more slowly, so that other entities (countries, lawmakers, …) have time to react to them.
Similar claims can be made about any structure of humans that exhibits gestalt intelligence, e.g. nations, stock markets, etc.
wat10000 8 hours ago [-]
It's quite likely I got it from that!
jay_kyburz 11 hours ago [-]
Yeah, the AI will be a tool of a corp. So in effect we are talking about limiting corporate power.
jandrese 14 hours ago [-]
No offense, but this way of thinking is the domain of comic book supervillains. "I must destroy the world in order to save it." "Morality is only holding me back from maximizing the value of the human race 1,000 or 1,000,000 years from now" type nonsense.
This sort of reasoning sounds great from 1000 feet up, but the longer you do it the closer you get to "I need to kill nearly all current humans to eliminate genetic diseases and control global warming and institute an absolute global rationalist dictatorship to prevent wars or humanity is doomed over the long run".
Or you get people who are working in a near panic to bring about godlike AI because they think that once the AI singularity happens the new AI God will look back in history and kill anybody who didn't work their hardest to bring it into existence because they assume an infinite mind will contain infinite cruelty.
ta988 13 hours ago [-]
but it will also contain infinite love.
retRen87 10 hours ago [-]
Oh this is a great breakdown, I totally agree. I feel there’s a morality hierarchy that’s akin to Maslow’s hierarchy of needs.
People benefit from a sense of a family, a sense of community. It helps us feel more secure, both personally and to our loved ones.
I think the more I view things through this lens, the more downstream benefits I see.
flufluflufluffy 15 hours ago [-]
Holy heck this is so well put and does the exact thing where it puts into words how I feel which is hard for me to do myself.
Twey 14 hours ago [-]
> The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions."
I really like the depth of analysis in your comment, but I think there's one important element missing, which is that this is not an individual decision but a group heuristic to which individuals are then sensitized. Individuals don't typically go so far as to (consciously or unconsciously) extrapolate others' logic forward to decide that it's dangerous. Instead, people get creeped out when other people don't adhere to social patterns and principles that are normalized as safe in their culture, because the consequences are unknown and therefore potentially dangerous; or when they do adhere to patterns that are culturally believed to be dangerous.
This can be used successfully to identify things that are really dangerous, but also has a high false positive rate (people with disabilities, gender identities, or physical characteristics that are not common or accepted within the beholder's culture can all trigger this, despite not posing any immediate/inherent threat) as well as a high false negative rate (many serial killers are noted to have been very charismatic, because they put effort into studying how to behave to not trigger this instinct). When we speak of something being normalized, we're talking about it becoming accepted by the mainstream so that it no longer triggers the ‘creepy’ response in the majority of individuals.
As far as I can tell, the social conservative basically believes that the set of normalized things has been carefully evolved over many generations, and therefore should be maintained (or at least modified only very cautiously) even if we don't understand why they are as they are, while the social liberal believes that we the current generation are capable of making informed judgements about which things are and aren't harmful to a degree that we can (and therefore should) continuously iterate on that set to approach an ideal goal state in which it contains only things that are factually known to be harmful.
As an interesting aside, the ‘creepy’ emotion, (at least IIRC in women) is triggered not by obviously dangerous situations but by ambiguously dangerous situations, i.e. ones that don't obviously match the pattern of known safe or unsafe situations.
> Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors".
The problem with the ‘us before them’ approach is that if two neighbourhoods each prioritize their local neighbourhood over the remote neighbourhood and compete (or go to war) to better their own neighbourhood at the cost of the other, generally both neighbourhoods are left worse off than they started, at least in the short term: both groups trying to make locally optimal choices leads (without further constraints) to globally highly suboptimal outcomes. In recognition of this a lot of people, not just capital-R Rationalists, now believe that at least in the abstract we should really be trying to optimize for global outcomes.
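To make that concrete, here is a toy sketch of the standard two-player coordination failure (payoff numbers invented for illustration; they are nobody's actual model of neighbourhoods):

    # Toy prisoner's dilemma between two neighbourhoods (Python).
    # Payoff numbers are invented for illustration.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # both better off
        ("cooperate", "defect"):    (0, 5),  # cooperator exploited
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),  # mutual harm
    }

    # Defecting is locally optimal for each side no matter what the
    # other does (5 > 3 and 1 > 0), yet mutual defection (1, 1) is
    # strictly worse for both than mutual cooperation (3, 3).
    for a in ("cooperate", "defect"):
        for b in ("cooperate", "defect"):
            print(f"{a:9} vs {b:9} -> {PAYOFFS[(a, b)]}")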
Whether anybody realistically has the computational ability to do so effectively is a different question, of course. Certainly I personally think the future-discounting ‘bias’ is a heuristic used to acknowledge the inherent uncertainty of any future outcome we might be trying to assign moral weight to, and should be accorded some respect. Perhaps you can make the same argument for the locality bias, but I guess that Rationalists (generally) either believe that you can't, or at least have a moral duty to optimize for the largest scope your computational power allows.
ajkjk 14 hours ago [-]
yeah, my model of the "us before them" question is that it is almost always globally optimal to cooperate, once a certain level of economic productivity is present. The safety that people are worried about is guaranteed not by maximizing their wealth but by minimizing their chances of death/starvation/conquest. Up to a point this means being strong and subjugating your neighbor (cf. most of antiquity?) but eventually it means collaborating with them and including them in your "tribe" and extending your protection to them. I have no respect for anyone who argues to undo this, which is I think basically the ethos of the Trump movement: by convincing everyone that they are under threat, they get people to turn on those who are actually working in concert with them (in order to enrich/empower themselves). It is a schema problem: we are so very, very far away from an us vs. them world that it takes delusions to believe we are in one.
(...that said, progressivism has largely failed in dispelling this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities because in many ways it does, or has been co-opted to do that. I think on net it does not, but it should be much more cut-and-dry than it is. For that to be the case progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism).
re: the first part of your reply. I sorta agree but I do think people do more extrapolation than you're saying on their own. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of failure modes of societies, which includes all kinds of examples of people inventing new moral rationalizations for things and using them to disregard personal morality. I think when people are extrapolating rationalists' ideas to find things that creep them out, they're largely pattern-matching to arguments they've seen in other places. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for not being open-minded about it or being unempathetic or being cruel or not giving people chances to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though (e.g. the OP I was replying to, who was taking all these criticisms of the rationalists at face value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition, in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of the modern political polarization). A small example is a case that I think everyone has experienced: a person discounting their own sense of creepiness about somebody else because they can't come up with a good reason to explain it, and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)
roxolotl 8 hours ago [-]
This is a remarkably well stated comment.
One thing I'd like to add though is that I do think there is an additional piece being discarded irrationally. They tend to highly undervalue everything you're describing. Humans aren't Vulcans. By being so obsessed with the risks of paperclip-maximizing-robots they devalue the risks of humans being the irrational animals they are.
This is why many on the left criticize them for being right wing. Not because they are, well some might be, but because they are incredibly easy to distract from what is being communicated by focusing too much on what is being said. That might be a bad phrasing, but what I mean is: when you look at this piece from last year about prison sentence length and crime rates by Scott Alexander[0], nothing he says is genuinely unreasonable. He's generally evaluating the data fairly and rationally. Some might disagree there but that's not my point. My point is that he's talking to a nonexistent group. The right largely believes that punishment is the point of prison. They might _say_ the goal is to reduce crime, but they are communicating based on a set of beliefs that strongly favors punitive measures for their own sake. This causes a piece like that to miss the forest for the trees, and it can be seen by those on the left as functionally right-wing propaganda.
Most people are not rational. Maybe some day they will be but until then it is dangerous to assume and act as if they are. This makes me see the rationalists as actually rather irrational.
Read other comments then. Not everyone is saying “they think they are better than us”.
The primary issue, as others have noted, is that they focus on people going to the highest paying jobs without much care for the morality of those jobs. Ergo they are fine being net negatives in terms of their work and philosophy.
All they do is donate money. Donations don’t fix society. Nothing changes structurally. No root problems are looked at.
They largely ignore capitalism’s faults or when I’ve seen them talk about, it’s done in a way of superficially decrying capitalist issues but then largely going along with them. Which ties into how they focus on high paying jobs regardless of morality (I’m exaggerating here but the overall point is correct).
—
HN is not intelligent when it comes to politics or the world. The avg person here is a western chauvinist with little political knowledge but a defensive ego about it. No need to be sad about this comment page.
trod1234 16 hours ago [-]
[flagged]
noname120 16 hours ago [-]
> They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
I'm not sure I see any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well aware of the limits of data-driven initiatives and the dangers of indulging in purely abstract thinking that can lead to conclusions that indeed don't make sense.
notahacker 14 hours ago [-]
The confluence of Bay Area rationalism and academic philosophy means a lot of other EA space is given to discussing hypotheticals in longwinded forum posts, blogs and papers. Some of those are well-trod utilitarian debates, others take it towards uniquely EA arguments like asserting that given that there could be as many as 10^31 future humans, essentially anything which claims to reduce existential risk - no matter how implausible the mechanism - has higher expected value than doing stuff that would certainly save human lives. An apparently completely unironic forum argument asked their fellow EAs to consider the possibility that given various heroic assumptions, the sum total of the suffering caused to mosquitos by anti-malaria nets might in fact be larger than the suffering caused by malaria they prevent. Obviously not a view shared by EAs who donate to antimalaria charities, but absolutely characteristic of the sort of knots EAs like to tie themselves in - it even has its own jokey jargon ('the rebugnant conclusion' and 'taking the train to crazy town') to describe adjacent arguments and the impulse to pursue them.
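For anyone who hasn't seen that expected-value argument spelled out, a toy version looks like this (all numbers invented except the 10^31 figure mentioned above):

    # Toy expected-value comparison (Python); numbers are invented.
    future_humans = 1e31   # speculative upper bound on future lives
    p_reduction   = 1e-12  # tiny claimed cut in extinction risk
    certain_lives = 1e6    # lives saved for certain by, say, bed nets

    ev_xrisk   = future_humans * p_reduction  # 1e19 expected lives
    ev_certain = certain_lives                # 1e6 lives

    # The speculative bet "wins" by 13 orders of magnitude, and it
    # keeps winning no matter how implausible the mechanism, as long
    # as the assumed future population is big enough.
    print(ev_xrisk > ev_certain)  # True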
The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...
It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....
noname120 16 minutes ago [-]
> that's not the centre of EA debate or culture....
EAs gave $1,886,513,058 through GiveWell[1], and there is 0 AI stuff in there (you can search in the linked Airtable spreadsheet).
There is also a whole movement for doing a lifetime commitment to give 10% of your earnings to charity. 9,880 people took the pledge so far[2].
As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".
LargeWu 14 hours ago [-]
It's also not a very big leap from "My purpose is to do whatever is the greatest good" to "It doesn't matter if I hurt people as long as the overall net result is good (by some arbitrary standard)"
ChadNauseam 11 hours ago [-]
99% of effective altruists and rationalists agree that you shouldn't hurt people as part of some complicated scheme to do good. For example, here is eliezer yudkowsky in 2008 saying exactly that, and explaining why it's true: https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t...
LargeWu 9 hours ago [-]
I believe they believe that, on its face.
I also believe that idealistic people will go to great lengths to convince themselves that their desired outcome is, in fact, the moral one. It starts by saying things like, "Well, what is harm, actually..." and then constructing a definition that supports the conclusions they've already arrived at.
I'm quite sure Sam Bankman-Fried did not believe he was harming anybody when he lost/stole/defrauded his investors and depositors' money.
Propelloni 3 hours ago [-]
Like a very old dude once said: "No one is willfully evil."
m-ee 10 hours ago [-]
This isn’t a hypothetical leap either. This thinking directly led to the murders committed by the Zizians.
oh_my_goodness 13 hours ago [-]
I think this is the key comment so far.
Veedrac 13 hours ago [-]
It seems very odd to criticize the group that most reliably and effectively funds global health and malaria prevention for this.
What is your alternative? What's your framework that makes you contribute to malaria prevention more or more effectively than EAs do? Or is the claim instead that people should shut down conversation within EA that strays from the EA mode?
jhbadger 13 hours ago [-]
The simple answer is you don't need a "framework" -- plain empathy for the less fortunate is good enough. But if the EAs actually want to do something about malaria (although the Gates Foundation does much, much more in that regard than the Centre for Effective Altruism), more power to them. But as Becker notes from his visits to the Centre, things like malaria and malnutrition are not the primary focus of the centre.
noname120 24 minutes ago [-]
EA people gave a total of $817,276,989 to malaria initiatives through GiveWell[1][2].
How much more do they need to give before you will change your mind about whether “EA's actually want to do something about malaria”?
I wish it were, but it's clearly not enough. There are plenty of people with healthy emotional empathy in the world, and yet children still die of easily preventable diseases.
I am plenty happy to simp for the Gates foundation, but I think it's important to acknowledge that becoming Bill Gates to support charity is not a strategy the average person can replicate. The question for me is how do I live my life to support the causes I care about, not who lives more impressive lives than me.
achierius 11 hours ago [-]
I think the group that most reliably and effectively funds global health -- at least in terms of total $ -- would be the United Nations, or perhaps the Catholic Church, or otherwise one national government or another.
If you exclude "nations" then it does look to be the Church: "The Church operates more than 140,000 schools, 10,000 orphanages, 5,000 hospitals and some 16,000 other health clinics". Caritas, the relevant charitable umbrella organization, gives $2-4b per year on its own, and that's not including the many, many operations run by religious orders not under that umbrella, or by the hundreds of thousands of parishes around the world (most of which operate charitable operations of their own).
And yet, rationalists are totally happy criticizing the Catholic Church -- not that I'm complaining, but it seems a bit hypocritical.
Veedrac 7 hours ago [-]
I appreciate the good these organizations do, but I don't think that's the right measure of it. A person wouldn't, in expectation, serve global health better by becoming Catholic than by joining EA. That Catholicism is large isn't the same as it being effective at solving malaria. EA is tiny relative to the Church but still manages to provide funding within an order of magnitude of the figures you mentioned here, with exact numbers depending on how you count.
Similarly, it's not like government funding is an overlooked part of EA. Working on government and government aid programs is something EA talks about, high-leverage areas like policy especially. If there's a more standard government role that an individual can take that has better outcomes than what EAs do, that would be an important update and I'd be interested in hearing it. But the criticism that EA is just not large enough is hard to act on, and more of a work in progress than a moral failing.
catern 9 hours ago [-]
Rationalists and EAs spend far more time praising the Catholic Church and other religious groups than criticizing them - since they spend essentially no time criticizing them, and do occasionally praise them.
sfblah 14 hours ago [-]
How do they escape the reality that the Earth will one day be destroyed, and that it's almost certainly impossible to ever colonize another planetary system? Just suicide out?
wat10000 12 hours ago [-]
If you value maximizing the number of human lives that are lived, then even “almost certainly impossible” is enough to justify focusing a huge amount of effort on that. Maybe interstellar colonization is a one in a million shot, but it would multiply the number of human lives by billions or trillions or more.
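The arithmetic behind that claim, with made-up numbers (a sketch, not anyone's actual forecast):

    # Toy numbers (Python); both values are invented.
    p_success  = 1e-6   # "one in a million shot"
    multiplier = 1e12   # colonization multiplies total lives lived

    # Expected multiplier on total human lives: 1e6. Even at long
    # odds, the expected payoff dwarfs anything Earth-bound, which
    # is why "almost certainly impossible" doesn't end the argument.
    print(p_success * multiplier)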
jandrese 15 hours ago [-]
One example is Newcomb's problem. It presupposes a ridiculous scenario where a godlike being acts irrationally and then people try to base their life around "winning" the game that will never ever happen to them.
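For reference, the setup (standard payoff numbers; this is only a sketch): an opaque box contains $1M if and only if the predictor foresaw you taking only that box, while a transparent box always contains $1k. With predictor accuracy p:

    # Newcomb's problem expected payoffs (Python), standard numbers.
    def expected_payoff(choice: str, accuracy: float) -> float:
        if choice == "one-box":
            # Opaque box holds $1M iff the predictor foresaw one-boxing.
            return accuracy * 1_000_000
        # "two-box": $1k for sure, plus $1M if the predictor erred.
        return 1_000 + (1 - accuracy) * 1_000_000

    for acc in (0.5, 0.9, 0.999):
        print(acc, expected_payoff("one-box", acc),
              expected_payoff("two-box", acc))
    # One-boxing pulls ahead only once accuracy exceeds ~50.05% with
    # these numbers; hence the endless one-box/two-box debate.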
TimTheTinker 17 hours ago [-]
> their initial conditions were highly artificial
There has to be (or ought to be) a name for this kind of epistemological fallacy, where in pursuit of truth, the pursuit of logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally way more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
nyeah 16 hours ago [-]
Yes, there's a name for it. They're dumbasses.
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22
https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
TeMPOraL 14 hours ago [-]
You mean people projecting this problem onto them with zero evidence beyond their own preconceptions and an occasional stereotype they read online?
HN should be better than this.
nyeah 13 hours ago [-]
Let's read the thread: "There has to be (or ought to be) a name for this kind of epistemological fallacy, where in pursuit of truth, the pursuit of logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally way more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation)."
Can people suffer from that impairment? Is that possible? If not, please explain how wrong assumptions can be eliminated without actively looking for them. If the impairment is real, what would you call its victims? Pick your own terminology.
TimTheTinker 13 hours ago [-]
Name calling may be helpful in some contexts, but I doubt it's persuasive to the people in question.
gopher_space 9 hours ago [-]
I think a huge part of the subtext to this conversation is that nothing is persuasive to the people in question. How many people in this thread have come down off the mountain to explain a point of view that's just bouncing off some dude who's never needed to develop a feedback loop for his thinking?
Calling someone a dumbass in this situation is a kindness, because the assumption is that they're capable of not being one with a little self-reflection.
comp_throw7 3 hours ago [-]
> Calling someone a dumbass in this situation is a kindness
First they laugh...
TimTheTinker 13 hours ago [-]
That may work for literature, but I was hoping for something more precise.
yawnxyz 14 hours ago [-]
Having just moved to the Bay Area, I've met a few AI "safety researchers" who seem to come from this EA / Rationalist camp, and they all behave more like preachers than thinkers / academics / researchers.
I don't think any "Rationalists" I ever met would actually consider concepts like the scientific method...
comp_throw7 3 hours ago [-]
> I don't think any "Rationalists" I ever met would actually consider concepts like scientific method...
In that case I don't think you've met any of the people under discussion.
yawnxyz 2 hours ago [-]
maybe, but I think Stanford scientists are a different breed of rationalists, and not Rationalists
protocolture 11 hours ago [-]
>They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic
I have read Effective Altruists like that. But I also remember seeing a lot of money donated to a bunch of really decent sounding causes because someone spent 5 minutes asking themselves what they wanted their donation to maximise, decided on "Lives saved" and figured out who is doing the best at that.
OtherShrezzing 18 hours ago [-]
They _are_ the effective altruism crowd.
maybelsyrup 6 hours ago [-]
> They remind me of the "Effective Altruism" crowd
Honestly thought they were the same people
foobar_______ 10 hours ago [-]
I really appreciate this comment. Now in my 30s, I was not even aware of EA or Rationalist groups in my 20s but your description of a person high on the smell of their own farts and putting logic as king was me. The world is so chaotic that it felt good and right that I could lean on something "true" like logic. It took me to the "correct" answers according to logic but very misguided answers as I see them now. Humans are driven by emotions and the world is probabilistic so relying so heavily on logic just seems so misguided now that I'm older.
I don’t have anything to add to the discussion except that EA was the very first thing that popped into my mind when I read Rationalists.
jhbadger 15 hours ago [-]
There is a great recent book by Adam Becker "More Everything Forever" that deals with the overlapping circles of "effective altruists", "rationalists", and "accelerationists". He's not very sympathetic to the movements as he sees them as mostly rationalizing what their adherents wanted to do anyway -- funding things like rockets and AI over feeding the needy because they see the former as helping more people in the future than dealing with real problems today.
not_your_mentat 16 hours ago [-]
The notion that our moral obligation somehow demands we reduce the suffering of wild animals in an ecosystem, living their lives as they have done since predation evolved and as they will do long after humans have ceased to be, is such a wild misunderstanding of who we are and what we are and what the universe is. I love my Bay Area friends. To quote the great Gwen Stefani, “This sh!t is bananas.”
HPsquared 17 hours ago [-]
People who confuse the map for the territory.
UncleOxidant 17 hours ago [-]
> They remind me of the "Effective Altruism" crowd
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker.
Highly recommended.
j_timberlake 7 hours ago [-]
This is a perfect example of a Hacker News hot-take, you complain about their attitude while making zero reference to their actual accomplishments or failures. No one will downvote you for posting something factually incorrect, because you didn't post any facts at all.
Lots of anti-malaria and vitamin stuff (as a cheap way to save lots of lives). There are also tons of EA animal charities too, such as Humane League: https://thehumaneleague.org/our-impact
habinero 5 hours ago [-]
A weird cult doing helpful community service doesn't cancel out the harm of them being a cult.
If they want to donate to charity, they can just donate. You don't gotta make a religion out of it.
sanderjd 11 hours ago [-]
lol, I love that you identified the exact correct academic term for this phenomenon.
noosphr 12 hours ago [-]
Incidentally, a good book on logic is the best antidote to that type of thinking. Once you learn the difference between a valid and a sound argument ("all cats are green; Tom is a cat; therefore Tom is green" is valid but not sound, because a premise is false), and then realize just how ambiguous every English sentence is, the idea that having a logical argument gives you something useful in everyday life becomes laughable rather quickly.
I also think the ambiguity of meaning in natural language is why statistical LLMs are so popular with this crowd. You don't need to think about meaning and parsing. Whatever the LLM assumes is the meaning is whatever the meaning is.
trod1234 16 hours ago [-]
Except with Effective Altruism (EA), it's not pure logic.
Logic requires properties of metaphysical objectivity.
If you use the true meaning of words, claiming such things are true when in fact they are false would be called irrationality, delusion, sophism, or fallacy.
mitthrowaway2 19 hours ago [-]
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
Certhas 17 hours ago [-]
I think this is a valid point. But to some degree both can be true. I often felt, when reading some of these texts: wait a second, there is a wealth of thinking on these topics out there; you are not at all situating your elaborate thinking in a broader context.
And there absolutely is a willingness to be challenged, and (maybe less so) a willingness to be wrong. But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever.
And importantly, maybe there also isn't as much taste for understanding the limits of vigorous discussion and rational deduction. Adorno and Horkheimer posit a dialectic of rationality and enlightenment; Habermas tries to rebuild rational discourse by analyzing its preconditions. Yet for all the vigorous intellectualism of the rationalists, none of that ever seems to feature even in passing (maybe I have simply missed it...).
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, I'd rather have some arrogance and a willingness to be debated and be wrong, over a timid need to defer to centuries of established thought, though. The people I've met in person I've always been happy to hang out with and talk to.
mitthrowaway2 17 hours ago [-]
That's a fair point. Speaking only for myself, I think I fail to understand why it's important to situate philosophical discussions in the context of all the previous philosophers who have expressed related ideas, rather than simply discussing the ideas in isolation.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
jerf 16 hours ago [-]
It may be easier to imagine someone trying to derive mathematics all by themselves, since it's less abstract. It's not that they won't come up with anything, it's that everything that even a genius can come up with in their lifetime will be something that the whole of humanity has long since come up with, chewed over, simplified, had a rebellion against, had a counter-rebellion against the rebellion, and ultimately packaged up in a highly efficient manner into a textbook with cross-references to all sorts of angles on it and dozens of elaborations. You can't possibly get through all this stuff on your own.
The problem is less clear in philosophy than mathematics, but it's still there. It's really easy on your own terms to come up with some idea that the collective intelligence has already revealed to be fatally flawed in some undeniable manner, or at the very least, has very powerful arguments against it that an individual may never consider. The ideas that have survived decades, centuries, and even millenia against the collective weight of humanity assaulting them are going to have a certain character that "something someone came up with last week" will lack.
(That said I am quite heterodox in one way, which is that I'm not a big believer in reading primary sources, at least routinely. Personally I think that a lot of the primary sources noticeably lack the refinement and polish added as humanity chews it over and processes it and I prefer mostly pulling from the result of the process, and not from the one person who happened to introduce a particular idea. Such a source may be interesting for other reasons, but not in my opinion for philosophy.)
lechatonnoir 8 hours ago [-]
Well, sure, but mathematics is the domain for which this holds most true out of any. It's less true for fields that are not as old.
I'm not sure if this counterpoint generalizes entirely to the original critique, since certainly LessWrongers aren't usually posting about or discussing math as if they've discovered it---usually substantially more niche topics.
lukev 12 hours ago [-]
Did you discover it from first principles by yourself because it's a natural conclusion anyone would reach if they ponder the subject?
Or because western culture reflects this theme continuously through all the culture and media you've been immersed in since you were a child?
Also the idea is definitely not new to Descartes, you can find echoes of it going back to Plato, so your idea isn't wrong per se. But I think it underrates the effect to which our philosophical preconceptions are culturally constructed.
lazyasciiart 16 hours ago [-]
Odds are good that the millions of people who have also read and considered these ideas have added to what you came up with at 6. Odds are also high that people who have any interest in the topic will probably learn more by reading Descartes and Kant and the vast range of well written educational materials explaining their thoughts at every level. So if you find yourself telling people about these ideas frequently enough to have opinions on how they respond, you are doing both yourself and them a disservice by not bothering to learn how the ideas have already been criticized and extended.
Certhas 16 hours ago [-]
It really depends on why you are having a philosophical discussion. If you are talking among friends, or just because you want to throw interesting ideas around, sure! Be free, have fun.
I come from a physics background. We used to (and still) have a ton of physicists who decide to dabble in a new field, secure in their knowledge that they are smarter than the people doing it, and that anything worthwhile that has already been thought of they can just rederive ad hoc when needed (economists are the only other group that seems to have this tendency...) [1]. It turned out every time that the people who had spent decades working on, studying, discussing and debating the field in question had actually figured important shit out along the way. They might not have come with the mathematical toolbox that physicists had, and outside perspectives that challenge established thinking to prove itself again can be valuable, but when your goal is to actually understand what's happening in the real world, you can't ignore what's been done.
Here's a very simple explanation as to why it's helpful from a "first principles" style analogy.
Suppose a foot race. Choose two runners of equal aptitude and finite existence. Start one at mile 1 and one at mile 100. Who do you think will get farther?
Not to mention, engaging in human community and discourse is a big part of what it means to be human. Knowledge isn't personal or isolated, we build it together. The "first principles people" understand this to the extent that they have even built their own community of like minded explorers, problem is, a big part of this bond is their choice to be willfully ignorant of large swaths of human intellectual development. Not only is this stupid, it also is a great disservice to your forebears, who worked just as hard to come to their conclusions and who have been building up the edifice of science bit by bit. It's completely antithetical to the spirit of scientific endeavor.
Aurornis 13 hours ago [-]
> But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "SlimeTimeMoldTime - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
dwohnitmok 11 hours ago [-]
I don't read HN or LW all that often, but FWIW I actually learned about SlimeMoldTimeMold's "A Chemical Hunger" series from HN and then read its most famous takedown from LessWrong: https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
Aurornis 8 hours ago [-]
> I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
That feels like revisionist history to me. It rose to fame on LessWrong and SlateStarCodex, was promoted by Yudkowsky, and proliferated for about a year and a half before the takedowns finally got traction.
While it was the topic du jour in rationalist spaces it was very difficult to argue against. I vividly remember how hard it was to convince anyone that SMTM wasn't a good source at the time, because so many people saw Yudkowsky endorse it, saw Scott Alexander give it a shout-out, and so on.
Now Yudkowsky has gone back and edited his old endorsement, it has disappeared from the discourse, and many want to pretend the whole episode never happened.
> (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
Exactly my point. It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded. It finally took someone writing it up in the form of rationalist rhetoric and seeding it into LessWrong to break the spell.
This is the trend with rationalist-centric contrarianism: You have to code your articles with the correct prose, structure, and signs to get uptake in the rationalist community. Once you see it, it's hard to miss.
dwohnitmok 5 hours ago [-]
> It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded.
Do you have any examples of this that predate that LW article? Ideally both the critique and its dismissal but just the critique would be great. The original HN submission had a few comments critiquing it but I didn't see anything in depth (or for that matter as strident).
EnPissant 4 hours ago [-]
Where is the evidence that Yudkowsky edited an old post?
voidhorse 16 hours ago [-]
You're spot on here, and I think this is probably also why they appeal to programmers and people in software.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
nyeah 16 hours ago [-]
Yeah, though I think you may be exaggerating how often the "genius insights" rise to the level of correct restatements of basic facts. That happens, but it's not the rule.
I grew up with some friends who were deep into the early roots of online rationalism, even slightly before LessWrong came online. I've been around long enough to recognize the rhetorical devices used in rationalist writings:
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario that predicts a rapid escalation of AI superpowers followed by disaster in only a couple years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right, with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
mitthrowaway2 13 hours ago [-]
Yes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
Aurornis 13 hours ago [-]
> Yes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
That's my point: Their rhetorical style is interpreted by the in-group as a sort of weird infallibility. Like they've covered both sides and therefore the work is technically correct in all cases. Once they go through the hedging dance, they can put forth the opinion-based point they're trying to make in a very persuasive way, falling back to the hedging in the future if it turns out to be completely wrong.
The writing style looks different depending on where you stand: Reading it in the forward direction makes it feel like the main point is very likely. Reading it in the backward direction you notice the hedging and decide they were also correct. Yet at the time, the rationalist community attaches themselves to the position being pushed.
> But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
lechatonnoir 8 hours ago [-]
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
edit: my apologies, that was someone else in the thread. I do feel like between the two comments though there is a "damned if you do, damned if you don't". (The original quote above I found absurd upon reading it.)
reducesuffering 47 minutes ago [-]
Haha my thoughts exactly. This HN thread is simultaneously criticizing them for being too assured, not considering other possibilities, and hedging that they may not be right and other plausibilities exist.
“Damned if you do” indeed…
sanderjd 10 hours ago [-]
Yeah I think you're getting at what my skepticism stems from: The article with the 55% certain epistemic status and the article with the 95% certain epistemic status are both written with equal persuasive oomph.
In most writing, people write less persuasively on topics they have less conviction in.
mitthrowaway2 12 hours ago [-]
> That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
Ok, let's scroll up the thread. When I refer to "the specific criticism that I quoted", and when you say "implying that because they say and do these things, they must be immune from the criticism delivered above": what do you think was the "criticism delivered above"? Because I thought we were talking about contrarian1234's claim to exactly this "strawman", and so far you have not appeared to agree with me that this criticism was invalid.
Veedrac 13 hours ago [-]
If putting up evidence about how people were wrong in their predictions, I suggest actually pointing at predictions that were wrong, rather than at recent predictions about the future whose resolution you disagree over. If putting up evidence about how people make excuses for failing predictions, I suggest actually showing them do so, rather than projecting that they will do so and blaming them for your projection.
Aurornis 13 hours ago [-]
It's been a while since I've engaged in rationalist debates, so I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads. :) You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
My point wasn't to nit-pick individual predictions, it was a general explanation of how the game is played.
Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
- He predicted at least $250 million in damages from Black Lives Matter protests.
- He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
- He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
It's also noteworthy that a lot of his predictions are about his personal life, his own blogging actions, or [redacted] things. These all get mixed in with a small number of geopolitical, economic, and medical predictions, with the net result of bringing his overall accuracy up.
chandlerswift 6 hours ago [-]
Am I reading all of these backwards? You say
> He predicted at least $250 million in damages from Black Lives Matter protests.
He says
> 5. At least $250 million in damage from BLM protests this year: 30%
which, by my reading, means he assigns greater-than-even odds that _less_ than $250 million in damages happened (I have no understanding of whether or not this result is the case, but my reading of your post suggests that you believe that this was indeed the outcome).
You say
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
while he says
> Vitamin D is _not_ generally recognized (eg NICE, UpToDate) as effective COVID treatment: 70%
(emphasis mine)
sanderjd 10 hours ago [-]
For what it's worth, your comments in this thread have been very good descriptions of things I became frustrated with after once being quite interested / enthralled with this community / movement!
(I feel like you're probably getting upvotes from people who feel similarly, but sometimes I feel like nobody ever writes "I agree with you" comments, so the impression is that there's only disagreement with some point being made.)
Aurornis 8 hours ago [-]
Thanks for sharing. You summed it up well: The community feels like a hidden gem when you first discover it. It feels like there's an energy of intelligence buzzing about interesting topics and sharing new findings.
Then you start encountering the weirder parts. For me, it was the groupthink and hero worship. I just wanted to read interesting takes on new topics, but if you deviated from the popular narrative associated with the heroes (Scott Alexander, Yudkowsky, Cowen, Aaronson, etc.) it felt like the community's immune system identified you as an intruder and started attacking.
I think a lot of people get drawn into the idea of it being a community where they finally belong. Especially on Twitter (where the latest iteration is "TPOT") it's extraordinarily clique-ish and defensive. It feels like high school level social dynamics at play, except the players are equipped with deep reserves of rhetoric and seemingly endless free time to dunk on people and send their followers after people who disagree. It's a very weird contrast to the ideals claimed by the community.
mitthrowaway2 5 hours ago [-]
Well nobody sent me; instead I had the strange experience of waking up this morning, seeing an interesting post about Scott Aaronson identifying as a rationalist, and when I check the discussion it's like half of HN has decided it's a good opportunity to espouse everything they dislike about this group of people.
Since when is that what we do here? If he'd written that he'd decided to become vegetarian, would we all be out here talking about how vegetarians are so annoying and one of them even spat on my hamburger one time?
And then of these uncalled-for takedowns, several -- including yours -- don't even seem to be engaging in good-faith discourse, and seem happy to pile on to attacks even when they're completely at odds with their own arguments.
I'm sorry to say it but the one who decided to use their free time to leer at people un-provoked over the internet seems to be you.
zahlman 6 hours ago [-]
> So I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads. :)
How was it condescending or lecturing?
> You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
Why is that not equally condescending or lecturing?
Veedrac 7 hours ago [-]
Please be civil.
I genuinely don't understand how you can point to someone's calibration curve where they've broadly done well, and cherry pick the failed predictions they made, and use this not just to claim that they're making bad predictions but that they're slimy about admitting error. What more could you possibly want from someone than a tally of their prediction record graded against the probability they explicitly assigned to it?
One man's modus ponens, as it goes.
zahlman 6 hours ago [-]
> it was a general explanation of how the game is played.
You seem to be trying to insinuate that Alexander et al. are pretending to know how things will turn out and then hiding behind probabilities when they don't turn out that way. This is missing the point completely. The point is that when Alexander assigns an 80% probability to many different outcomes, about 80% of them should occur, and it should not be clear to anyone (including Alexander) ahead of time which 80%.
> He predicted at least $250 million in damages from Black Lives Matter protests.
Edit: I see that the prediction relates to 2021 specifically. In the wake of 2020, I think it was perfectly reasonable to make such a prediction at that confidence level, even if it didn't actually turn out that way.
> He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
If you make many predictions at 70-80% confidence, as he does, you should expect 20-30% of them not to come true. It would in fact be a failure (underconfidence) if they all came true. You are in fact citing a blog post that is exactly about a self-assessment of those confidence levels.
Also, he gave a 70% chance to Vitamin D not being generally recognized as a good COVID treatment.
> These all get mixed in with a small number of geopolitical, economic, and medical predictions with the net result of bringing his overall accuracy up.
The point is not "overall accuracy", but overall calibration - i.e., whether his assigned probabilities end up making sense and being statistically validated.
You have done nothing to establish any correlation between the category of prediction and his accuracy on it.
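To make "calibration" concrete, here is a minimal toy sketch in Python. The numbers are invented for illustration (this is not Alexander's actual record): bucket a prediction log by assigned probability, then compare each bucket's confidence to the fraction of predictions that actually came true.

    from collections import defaultdict

    predictions = [  # (assigned probability, did it happen?) -- invented data
        (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
        (0.7, True), (0.7, False), (0.7, True),
        (0.3, False), (0.3, False), (0.3, True), (0.3, False),
    ]

    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[p].append(happened)

    for p in sorted(buckets):
        rate = sum(buckets[p]) / len(buckets[p])  # fraction that came true
        print(f"assigned {p:.0%}: {rate:.0%} came true (n={len(buckets[p])})")

    # Good calibration means the hit rate tracks the assigned probability,
    # so at 70-80% confidence, 20-30% of predictions *should* fail.

On data like this you can also see what failure would look like: a bucket whose hit rate lands far from its assigned probability.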
sanderjd 11 hours ago [-]
I read rationalist writing for a very long time, and eventually concluded that this part of it was, not universally but predominantly, performative. After you read enough articles from someone, it is clear what they have conviction in, even when they are putting up disclaimers saying they don't.
bakuninsbart 18 hours ago [-]
Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, and hiring exceptional talent, at a time when there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular, circlejerks led to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest), and started caring a lot about shrimp welfare (no one else does).
mitthrowaway2 17 hours ago [-]
I don't think this validates the criticism that "they don't really ever show a sense of[...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
salynchnew 16 hours ago [-]
> caring a lot about shrimp welfare (no one else does).
Ah. They are working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
freejazz 13 hours ago [-]
>I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I don't believe any particular individual can do more about shrimp welfare than they can about the intractable problems we do face.
lyu07282 7 hours ago [-]
> I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I think it's a result of its complete denial and ignorance of politics. The rationalist and effective altruist movements make a whole lot more sense if you realize they are talking about deeply social and political issues with all the politics removed. It's technocratism, the poster child of the kind of "there is no alternative" neoliberalism that everyone in the western world has been indoctrinated into since the 80s.
It's a fundamental contradiction: we don't need to talk about politics because we already know liberal democracies and free-market capitalism are the best we're ever going to achieve, even as we face numerous intractable problems that cannot possibly be related to liberal democracies and free-market capitalism.
The problem is: How do we talk about any issue the world is facing today without ever challenging or even talking about any of the many assumptions the western liberal democracies are based upon? In other words: the problems we face are structural/systemic, but we are not allowed to talk about the structures/systems. That's how you end up with space flight and shrimp welfare and AGI/ASI catastrophizing taking up 99% of everything these people talk about. It's infantile, impotent liberal escapism more than anything else.
gjm11 16 hours ago [-]
> both can be true
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
ToValueFunfetti 17 hours ago [-]
>they bought mansions for team building
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
As far as I know it's never hosted an impress-the-oligarch fundraiser, which as you say would at least have a logic behind it[1] even if it might seem distasteful.
For a philosophy which started out from the point of view that much of mainstream aid was spent with little thought, it was a bit of an end of Animal Farm moment.
(to their credit, a lot of people who identified as EAs were unhappy. If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
[1]a pretty shaky one considering how easy it is to impress American billionaires with Oxford architecture without going to the expense of operating a nearby mansion as a venue, particularly if you happen to be a charitable movement with strong links to the university...
[2]obviously people are only objecting to it for PR purposes because they're not smart enough to realise that capital appreciates and that venues cost money, and definitely not because they've got a pretty good idea how expensive upkeep on little used medieval venues are and how many alternatives exist if you really care about the cost effectiveness of your retreat, especially to charitable movements affiliated with a university...
Filligree 8 hours ago [-]
> If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
I’m a bit confused by this one.
Are you saying that no-one who identifies as rationalist sneered at the objections? Because I don’t think that’s true.
notahacker 3 hours ago [-]
Nope, I'm implying the people sneering at the objections were the self proclaimed rationalists. Other, less contrarian thinkers were more inclined to spot that a $15m heritage building might not be the epitome of cost-effective venues...
ToValueFunfetti 13 hours ago [-]
Ah, fair enough! I had heard the "hosting wealthy donors" as the primary motivation, but it appears to be secondary. My bad.
>As far as I know it's never hosted an impress-the-oligarch fundraiser
As far as I know, they only hosted 3 events there before deciding to sell, so this is low-information.
freejazz 13 hours ago [-]
>Are you sure you're not painting this group with an overly-broad brush?
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it doesn't appear to be reasonably applied to their own ideas? What you described is otherwise just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people"
hiddencost 18 hours ago [-]
They're behind Anthropic and were behind openai being a nonprofit. They're behind the friendly AI movement and effective altruism.
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
JamesBarney 32 minutes ago [-]
What NGOs are the rationalists or effective altruists responsible for killing or defunding?
mitthrowaway2 18 hours ago [-]
I don't see how this is a response to what I wrote. Can you explain?
fatbird 18 hours ago [-]
I think GP is saying that their epistemic humility is a pretense, a pose. They do a lot of throat clearing about quantifying their certainty and error checking themselves, and then proceed to bring about very consequential outcomes anyway for absurd reasons with predictable side effects that they should have considered but didn't.
notahacker 14 hours ago [-]
Yeah. It's not that they never express uncertainty so much as they like to express uncertainty as arbitrarily precise and convenient-looking expected value calculations which often look like far more of a rhetorical tool to justify their preferences (I've accounted for the uncertainty and even given a credence as low as 14.2% I'm still right!) than a decision making heuristic...
NoGravitas 19 hours ago [-]
I've always seen the breathless Singularitarian worrying about AI Alignment as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman, from algorithmic bias, to policy-washing, to energy costs and acceleration of wealth concentration. It also leads to so-called longtermism - discounting the benefits of solving current real problems and focusing entirely on solving a hypothetical one that you think will someday make them all irrelevant.
tuveson 19 hours ago [-]
My feeling has been that it’s a lot of people that work on B2B SaaS that are sad they hadn’t gotten the chance to work on the Manhattan Project. Be around the smartest people in your field. Contribute something significant (but dangerous! And we need to talk about it!) to humanity. But yeah computer science in the 21st century has not turned out to be as interesting as that. Maybe just as important! But Jeff Bezos important, not Richard Feynman important.
HPsquared 15 hours ago [-]
"Overproduction of elites" is the expression.
thom 18 hours ago [-]
The Singularitarians were breathlessly worrying 20+ years ago, when AI was absolute dogshit - Eliezer once stated that Doug Lenat was incautious in launching Eurisko because it could've gone through a hard takeoff. I don't think it's just an act to launder their evil plans, none of which at the time worked.
notahacker 14 hours ago [-]
Fair. OpenAI totally use those arguments to launder their plans, but that saga has been more Silicon Valley exploiting longstanding rationalist beliefs for PR purposes than rationalists getting rich...
Eliezer did once state his intentions to build "friendly AI", but seems to have been thwarted by his first order reasoning about how AI decision theory should work being more important to him than building something that actually did work, even when others figured out the latter bit.
salynchnew 16 hours ago [-]
Yeah, people were generally terrified of this stuff back before you could make money off of it.
philipov 19 hours ago [-]
yep, the biggest threat posed by AI comes from the capitalists who want to own it.
impossiblefork 19 hours ago [-]
I actually think the people developing AI might well not get rich off it.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is where the pressure would probably push things if the technology keeps getting better.
cogman10 19 hours ago [-]
It smacks of a gold rush. The winners will be the people selling shovels (Nvidia) and housing (AWS). It may also be the guides showing people the mountains (Cursor, OpenAI, etc.).
I suspect you'll see a few people "win" or strike it rich with AI, but the vast majority will simply be left with a big bill.
nradov 19 hours ago [-]
When railroads were first being built across the continental USA, those companies also had high valuations (for the time). Most of them ultimately went bankrupt or were purchased for a fraction of their peak valuation. But the tracks remained, and many of those routes are still in use today.
kridsdale1 18 hours ago [-]
Exact same thing happened with fiber optic cable layers in the late 1990s. On exactly the same routes!
cguess 17 hours ago [-]
It's because the land-rights were more valuable than the steel rails, which the fiber optic companies bought up.
pavlov 18 hours ago [-]
And today the railroad system in the USA sucks compared to other developed countries and even China.
It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
Perhaps American AI efforts will one day be viewed similarly. “Yeah, they had an early rush, lots of innovation, high valuations, and robber barons competing. Today it’s just stale old infra despite the high-energy start.”
impossiblefork 17 hours ago [-]
I think it's unlikely that AI efforts will go as railroads have. I think being an AI foundation model company is more like being an airplane builder than like a railway company, since you develop your technology.
etblg 12 hours ago [-]
Plenty of those similarly went bankrupt over the years, and now the USA mostly has Boeing, which has reached a state of continual crisis and is being propped up by the government.
lupusreal 18 hours ago [-]
America's passenger rail sucks; it couldn't compete with airplanes, and every train company got out of the business, abandoning it to the government. But America does have a great deal of freight rail which sees a lot of use (much more than in Europe, I don't know how it compares to China though.)
lazyasciiart 16 hours ago [-]
One reason the passenger service sucks is that the freight rail companies own the tracks, and are happy to let a passenger train sit behind a freight train for a couple hours waiting for space in the freight yard so it can get out of the way.
9rx 13 hours ago [-]
> freight rail companies own the tracks
Humans are also freight, of course. It is not like the rail companies really care about what kind of freight is on the trains, so long as it is what the customer considers most important (read: most profitable). Humans are deprioritized exactly because they aren't considered important by the customer, which is to say that the customer, who is also the freight in this case, doesn't really want to be on a train in the first place. The customer would absolutely ensure priority (read: pay more, making it known that they are priority) if they wanted to be there.
I understand the train geeks on the internet find it hard to believe that not everyone loves trains like they do, but the harsh reality is that the average American Joe prefers other means of transportation. Should that change in the future, the rail network will quickly accommodate. It has before!
lupusreal 15 hours ago [-]
The root cause is that Americans generally prefer any mode of transit other than rail, so passenger rail isn't profitable, so train companies naturally prioritize freight.
For what it's worth, I like traveling by train and do so whenever I can, but I'm an outlier. Most Americans look at the travel times and laugh at the premise of choosing a train over a plane. And when I say they look at the travel times, I don't mean they actually bother to look up train routes. They just know that airplanes are several times faster. Delays suffered by trains never get factored into the decision because trains aren't taken seriously in the first place.
China hasn't shown that their railroad buildout will work. My understanding is they currently aren't making enough return to pay off debt, let alone plan for future maintenance. Historically the command economy type stuff looks great in the early years; it's later on we see if that is reality.
You are comparing the USA today to the robber baron phase; who's to say China isn't in the same phase? Lots of money is being thrown at new railroads, and you have Chinese political and business leaders chasing that money. What happens when it goes into low-budget maintenance mode?
entropicdrifter 17 hours ago [-]
The USA today is in a robber baron phase. We only briefly left it for about 2 generations due to the rise of labor power in the late 1800s/early 1900s. F.D.R. was the compromise president put into place to placate labor and prevent a socialist revolution.
9rx 17 hours ago [-]
> And today the railroad system in the USA sucks compared to other developed countries and even China.
Nonsense. The US has the largest freight rail system in the world, and is considered to have the most efficient rail system in the world to go along with it.
There isn't much in the way of passenger service, granted, but that's because people in the US aren't, well, poor. They can afford better transportation options.
> It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
It initially built out the passenger rail just fine, but then evolution saw better options come along. Passenger rail disappeared because it no longer served a purpose. It is not like, say, Japan where the median household income is approaching half that of Mississippi and they hold on to rail because that's what is affordable.
hodgesrm 10 hours ago [-]
This analysis seems wrong for at least a couple of reasons.
1. Freight is easier to manage and has better economics on a dedicated network. The US freight network is extremely efficient, as others have pointed out. Other networks, e.g., Germany's, instead prioritized passenger service. In Germany rail moves a small proportion of freight (19%) compared to trucks. [0] It's really noticeable on the Autobahn, unlike in the US, where a lot of truck traffic is intermodal loads.
2. The US could have better rail service by investing in passenger networks. Instead we have boondoggles like the California high-speed rail project which has already burned through 10s of billions of dollars with no end in sight. [1] Or the New Jersey Transit system which I had the pleasure to ride on earlier today to Newark Airport. It has pretty good coverage but needs investment.
> This analysis seems wrong for at least a couple of reasons.
How so?
> The US freight network is extremely efficient as others have pointed out.
'Others' being literally the comment you replied to.
> The US could have better rail service by investing in passenger networks.
Everything there is can be improved, of course, but to what significance here?
Detrytus 17 hours ago [-]
> There isn't much in the way of passenger service, granted, but that's because people in the US aren't, well, poor. They can afford better transportation options.
This is such a misguided view... Trains (when done right) aren't "for the poor", they are a great transportation option that beats both airplanes and cars. In Poland, which isn't even close to the best, you can travel between big cities at speeds above 200km/h, and you can use regional rail for your daily commute, both of those options being very comfortable and convenient, much more so than traveling by car.
9rx 17 hours ago [-]
Poland is approximately the same geographical size as Nevada. In the US, "between cities" is more like New York to Las Vegas, not Las Vegas to... uh, I couldn't think of another city in Nevada off the top of my head. What under-serviced route were you thinking of there?
What gives you the idea that rail would be preferable to flying for the NYC to LAS route if only it existed? Even as the crow flies it is approximately 4,000 km, meaning that at 200 km/h you are still looking at around 20 hours of travel in an ideal case. Instead of just 5 hours by plane. If you're poor an additional 15 hours wasted might not mean much, but when time is valuable?
danans 16 hours ago [-]
> In the US, "between cities" is more like New York to Las Vegas, not Las Vegas to... uh, I couldn't think of another city in Nevada off the top of my head. What under-serviced route were you thinking of there?
Why would you constrain the route to within a specific state? In fact, right now a high-speed rail line is being planned between Las Vegas and LA.
But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including:
Chicago/Detroit
Dallas/Houston
LA/SF
Atlanta/Charlotte
9rx 16 hours ago [-]
> In fact, right now a high-speed rail line is being planned between Las Vegas and LA.
Right now and since 1979!
I'll grant you that people love to plan, but it turns out that they don't love putting on their boots and picking up a shovel nearly as much.
> But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including
And there is nothing stopping those lines from being built other than the lack of will to do it. As before, the will doesn't exist because better options exist.
nradov 16 hours ago [-]
There are a lot more obstacles than lack of will. There are also property rights, environmental reviews, availability of skilled workers, and lack of capital. HN users sometimes have this weird fantasy that with enough political will it's possible to make enormous changes but that's simply not how things operate in a republic with a dual sovereignty system.
9rx 15 hours ago [-]
> There are also property rights, environmental reviews, availability of skilled workers, and lack of capital.
There is no magic in this world like you seem to want to pretend. All of those things simply boil down to people. Property rights only exist because people say they do, environmental reviews only exist because people say they do, skilled workers are, well, literally people, and the necessary capital is already created. If the capital is being directed to other purposes, it is only because people decided those purposes are more important. All of this can change if the people want it to.
> HN users sometimes have this weird fantasy that with enough political will it's possible to make enormous changes but that's simply not how things operate in a republic with a dual sovereignty system.
Hell, the republic and dual sovereignty system itself only exists because that's what people have decided upon. Believe it or not, it wasn't enacted by some mythical genie in the sky. The people can change it all on a whim if the will is there.
The will isn't there of course, as there is no reason for the will to be there given that there are better options anyway, but if the will was there it'd be done already (like it already is in a few corners of the country where the will was present).
Kon-Peki 15 hours ago [-]
> Chicago/Detroit
There has been continuous regularly scheduled passenger service between Chicago and Detroit since before the Civil War. The current Amtrak Wolverine runs 110 MPH (180 KPH) for 90% of the route, using essentially the same trainset that Brightline plans to use.
danans 13 hours ago [-]
Fair point. Last time I took that train (mid 1990s) it didn't run to Pontiac or Troy, and I recall there being very infrequent service. As far as I know, it's not the major mode of passenger transit between Detroit and Chicago. Cars are. That might be because of the serious lack of last-mile transit connectivity in the Detroit area.
Kon-Peki 13 hours ago [-]
Cars are definitely the major mode. Lots of quick flights, too.
They’ve made a lot of investments since the 1990s. It’s much improved, though perhaps not as nice as during the golden years when it was a big part of the New York Central system (from the 1890s to the 1960s they had daily trains that went Boston/NYC/Buffalo/Detroit/Chicago through Canada from Niagara Falls to Windsor).
During the first Trump administration, Amtrak announced a route that would go Chicago/Detroit/Toronto/Montreal/Quebec City using that same rail tunnel underneath the Detroit River. It was supposed to start by 2030. We’ll see if it happens.
Detrytus 16 hours ago [-]
Also, if you go all in and build something equivalent to Chinese bullet trains (that go with speeds up to 350km/h) you could do for example NY to Chicago in 3.5 hours, or even NY to Miami in 6 hours :-D (I know, not very realistic)
gwd 14 hours ago [-]
Not sure how we got from Scott A being a rationalist to trains, but since we're here, I want to say:
I've taken a Chinese train from Zhengzhou, in central China, to Shenzhen, and it was fantastic. Cheap, smooth, fast, lots of legroom, easy to get on and off or walk around to the dining car. And, there's a thing where boiling hot water is available, so everyone brings instant noodle packs of every variety to eat on the train.
Can't even imagine what the US would be like if we had that kind of thing.
A_D_E_P_T 1 hours ago [-]
Similar experience here. I'd always prefer it to flying.
Getting to the airport in most major cities takes an hour, and then there's the whole pre-flight security theatre, and the flights themselves are rarely pleasant. To add insult to injury, in the US it's usually a $50 cab ride to the airport and there are $28 ham-and-cheese sandwiches in the terminal if you get hungry.
In China and Japan the trains are centrally located, getting on takes ten minutes, and the rides are extremely comfortable. If such a thing existed in the US I think it would be extremely popular. Even if it was just SF-LA-Vegas.
AnimalMuppet 16 hours ago [-]
Is being built? Um, not quite. Is being planned. Is arranging for right-of-way. But to the best of my knowledge, actual construction has not started.
fragmede 16 hours ago [-]
If you can't think of another city in Nevada off the top of your head, are you even American? (Reno.)
Anyway, New York to Las Vegas spans most of the US. There are plenty of routes in the US where rail would make sense. Between Boston, New Haven, New York City, Philadelphia, Baltimore, and Washington, D.C., which has the Amtrak Acela. Or perhaps Miami to Orlando, which has a privately funded high speed rail connection called Brightline that runs at 200 km/h, whose ridership was triple what had been expected at launch.
9rx 16 hours ago [-]
> are you even American?
I am, thankfully, not.
> Which has a privately funded high speed rail connection called Brightline that runs at 200 km/h
Which proves that when the will is there, it will be done. The only impediment in other places is simply the people not wanting it. If they wanted it, it would already be there.
The US has been here before. It built out a pretty good, even great, passenger rail network a couple of centuries ago when the people wanted it. It eventually died out simply because the people didn't want it anymore.
If they want it again in the future, it will return. But as for the moment...
ambicapter 17 hours ago [-]
Yeah, a cheap transportation option is a terrible thing to have... /s
9rx 16 hours ago [-]
It's not that it would be terrible, but in the real world people are generally lazy and will only do what they actually want to see happen. Surprisingly, we don't yet have magical AI robots that autonomously go around turning all imagined ideas into reality without the need for human grit.
Since nobody really wants passenger rail in the US, they don't put in the effort to see that it exists (outside of some particular routes where they do want it). In many other countries, people do want broad access to passenger rail (because that's all they can afford), so they put in the effort to have it.
~200 years ago the US did want passenger rail, they put in the work to realize it, and it did have a pretty good passenger rail network at the time given the period. But, again, better technology came along, so people stopped maintaining/improving what was there. They could do it again if they wanted to... But they don't.
lazyasciiart 16 hours ago [-]
What an ironic side thread in a conversation about people who are confidently ignorant.
9rx 16 hours ago [-]
I am not sure the irony works as told unless software and people are deemed to be the same thing. But what is there to suggest that they are?
bilbo0s 19 hours ago [-]
Just checked.
The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.
On the one hand, I guess that's just life here in reality.
On the other, man, reality sucks sometimes.
baq 18 hours ago [-]
That's capitalism for you - losers' margins were winners' opportunities.
Imagine if they were bought by losers.
parpfish 19 hours ago [-]
Or the propagandists that use it
bilbo0s 18 hours ago [-]
They won't be allowed to use it unless they serve the capitalists who own it.
It's not social media. It's a model the capitalists train and own. The best the rest of us will have access to are open source ones. It's like the difference between going into court backed by Google searches as opposed to Lexis/Nexis. You're gonna have a bad day with the judge.
Here's hoping the open source stuff gets trained on quality data rather than reddit and 4chan. Given how the courts are leaning on copyright, and the lack of vetted data outside copyright holders' remit, I'm not sanguine about the chances of parity long term.
int_19h 5 hours ago [-]
Looking at Grok especially, it doesn't feel like a given that you can have a true SOTA model that is properly brainwashed.
thrance 1 hours ago [-]
To be fair, Musk is probably the least subtle megabillionaire of them all, and that reflects in the odd behavior of his silicon child. I don't doubt the competency of the likes of Thiel to build their techno-monarchy.
thrance 17 hours ago [-]
The propagandists serve the capitalists, so it's all the same.
conception 9 hours ago [-]
Nah, the biggest threat is by far from state actors using the technology for control and waging war.
NoMoreNicksLeft 17 hours ago [-]
> as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman,
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
danans 17 hours ago [-]
> We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
NoMoreNicksLeft 16 hours ago [-]
If that has the potential to ruin economies, then the economic rot is so much more profound than anyone (me included) ever realized.
jay_kyburz 13 hours ago [-]
I think when the GP says "our assumptions about individuals' economic value." they mean half the workforce becoming unemployed because the auto corrector can do it cheaper.
That's going to be a swift kick to your economy, no matter how strong.
stickfigure 15 hours ago [-]
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Have you ever read Scott Alexander's blog (Slate Star Codex, now Astral Codex Ten)? It's full of doubt and self-questioning. The guy even keeps a public list of his mistakes:
I'll admit my only touchpoint to the "rationalist community" is this blog, but I sure don't get "full of themselves" from that. Quite the contrary.
Avicebron 23 hours ago [-]
Yeah the "rational" part always seemed a smokescreen for the ability to produce and ingest their own and their associates methane gases.
I get it, I enjoyed being told I'm a super genius always right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class, maybe consider there's more good to be done in rural Indiana getting some dirt under those nails..
shermantanktop 19 hours ago [-]
The meta with these people is “my brilliance comes with an ego that others must cater to.”
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
dkarl 19 hours ago [-]
It's a conflict as old as time. What do you do when an argument leads to an unexpected conclusion? I think there are two good responses: "There's something going on here, so let's dig into it," or, "There's something going on here, but I'm not going to make time to dig into it." Both equally valid.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
bilbo0s 18 hours ago [-]
Might kind of point to real life people having too much of what is now called "rationality", and very little of what used to be called "wisdom"?
yamazakiwi 18 hours ago [-]
Wisdom tends to resemble shallow aphorisms despite being framed as universal. Rather than interrogating wisdom's relevance or depth, many people simply repeat it uncritically as a shortcut to insight. This reflects more about how people use wisdom than the content itself, but I believe that behavior contributes to our perception of the importance of wisdom.
It frequently reduces complex problems into comfortable oversimplifications.
Maybe you don't think that is real wisdom, and maybe that's sort of your point, but then what does real wisdom look like? Should wisdom make you considerate of the multiple contexts it does and doesn't affect? Maybe the issue is we need to better understand how to evaluate and use wisdom. People who truly understand a piece of wisdom should communicate deeply rather than parroting platitudes.
Also to be frank, wisdom is a way of controlling how others perceive a problem, and is a great way to manipulate others by propping up ultimatums or forcing scope. Much of past wisdom is unhelpful or highly irrelevant to modern life.
e.g. "Good things come to those who wait."
Passive waiting rarely produces results. Initiative, timing, and strategic action tend to matter more than patience.
aspenmayer 10 hours ago [-]
I have enjoyed learning the context and original meaning behind many of these aphorisms and words of wisdom, and that is the true utility of learning them, so that you can subvert them and invert them. Cultural touchstones have value because of shared context. The specific utility and applicability of wisdom varies because conversation is context specific and outcome dependent based on the adversarial vs collaborative nature of the dialogue.
Cthulhu_ 19 hours ago [-]
It feels like a shield of sorts, "I am a rationalist therefore my opinion has no emotional load, it's just facts bro how dare you get upset at me telling xyz is such-and-such you are being irrational do your own research"
but I don't know enough about it, I'm just trolling.
felipeerias 23 hours ago [-]
The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but won’t probably be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
ImaCake 20 hours ago [-]
>The problem with trying to reason everything from first principles is that most things didn’t actually came about that way.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
lsp 19 hours ago [-]
A lot of people really need to be reminded of this.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
ImaCake 10 hours ago [-]
Brilliant way to phrase this idea! I think it's incredible that we can even manage an imperfect way to escape our "social" brain. It's clearly very powerful - math, and thus all of science, only exists because of our weird predilection for thinking abstractly enough to (sorta) break our social bounds. At least that's how it looks in these terms; I am sure you can poke holes in it.
baxtr 20 hours ago [-]
Yes, and if you read Popper that’s exactly how he defined rationality / the scientific method: to solve problems of life.
ImaCake 10 hours ago [-]
>if you read Popper
Thanks, I might actually go do this :) I recently got exposed to a very persuasive form of "rationalism is a social construct" by reading "Alchemy" by Rory Sutherland. But a theme in these comments is that a lot of these ideas are just recycled from philosophers and that the philosophers were less likely to try and induct you into a cult.
loose-cannon 23 hours ago [-]
Reducibility is usually a goal of intellectual pursuits? I don't see that as a fault.
nyeah 17 hours ago [-]
Ok. A lot of things are very 'reducible' but information is lost. You can't extend back from the reduction to the original domain.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
'Reducibility' is a property that, if present, makes problems tractable or even practical.
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, on which the PEM (principle of the excluded middle) assumption is based.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
jltsiren 22 hours ago [-]
"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.
js8 20 hours ago [-]
I would not be so sure. There are many fields where reductionism was applied in practice and it yielded useful results, thanks to computers.
Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).
Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.
gilleain 20 hours ago [-]
> protein folding (reduction to quantum chemistry),
I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.
Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.
jltsiren 17 hours ago [-]
I meant that the word "reductionist" is usually an accusation of ignorance. It's not something people doing reductionist work actually use.
nyeah 17 hours ago [-]
But that common use of the word is ignorant nonsense. So, yes, someone is wrong on the internet. So what?
jltsiren 16 hours ago [-]
The context here was a claim that reducibility is usually a goal of intellectual pursuits. Which is empirically false, as there are many academic fields with a negative view of reductionism.
nyeah 17 hours ago [-]
'Reductionist' can be an insult. It can also be an uncontroversial observation, a useful approach, or a legitimate objection to that approach.
If you're looking for insults, and declaring the whole conversation a "culture war" as soon as you think you found one, (a) you'll avoid plenty of assholes, but (b) in the end you will read whatever you want to read, not what the thoughtful people are actually writing.
colordrops 23 hours ago [-]
What the person you are replying to is saying is that some things are not reducible, i.e. that the vast array of complexity and detail is all relevant.
loose-cannon 22 hours ago [-]
That's a really hard belief to justify. And what implications would that position have? Should biologists give up?
The largest of the sporadic finite simple groups (simple groups are themselves objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and cannot be reduced to simpler "factors".
Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
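As a sanity check on the number quoted above, a minimal sketch using the textbook prime factorization of the Monster's order (note that "cannot be reduced" refers to normal subgroups; the order itself factors into primes just fine):

    # Multiply out the standard prime factorization of the Monster's
    # order and compare it with the figure quoted above.
    factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1,
                     19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
    order = 1
    for prime, power in factorization.items():
        order *= prime ** power
    assert order == 808017424794512875886459904961710757005754368000000000
    print(f"{order:.3e}")  # ~8.080e+53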
whatshisface 17 hours ago [-]
That's not really true, otherwise every paper about it would be that many words long. The monster group can be "reduced" into its definition and its properties which can only be considered a few at a time. A person has a working memory of three to seven items.
__MatrixMan__ 20 hours ago [-]
I think that chemistry, physics, and mathematics, are engaged in a program of understanding their subject in terms of the sort of first principles that Descartes was after. Reduction of the subject to a set of simpler thoughts that are outside of it.
Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.
lukas099 16 hours ago [-]
On the other hand, bio is the branch of science with a single accepted "theory of everything": evolution.
aspenmayer 10 hours ago [-]
Evolution is a theory of the origin of species via natural selection of heritable traits; evolution is not a theory of abiogenesis, the origin of life itself.
__MatrixMan__ 8 hours ago [-]
That's a fine counterexample to "theory of everything", and fertile ground for spirited debate. But I think it's a distinction that's relevant to <1% of the work that biologists do, so like... does it matter?
pixl97 20 hours ago [-]
How reducible is the question. If some particular events require a minimum amount of complexity, how do you reduce it below that?
Veen 20 hours ago [-]
It would imply that when dealing with complex systems, models and conceptual frameworks are, at the very best, useful approximations. It would also imply that it is foolhardy to ignore phenomena simply because they are not comprehensible within your preferred framework. It does not imply biologists should give up.
the_af 21 hours ago [-]
Biologists don't try to reason everything from first principles.
Actually, neither do Rationalists, but instead they cosplay at being rational.
falcor84 21 hours ago [-]
> Biologists don't try to reason everything from first principles.
What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.
svnt 20 hours ago [-]
Setting up reasoning from first principles vs magical thinking is a false dichotomy and an implicit swipe.
falcor84 19 hours ago [-]
Ok, mea culpa. So what distinction did you have in mind?
hiAndrewQuinn 23 hours ago [-]
>Maybe it's actually going to be rather benign and more boring than expected
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists have basically been running a 30-year-long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
s1mplicissimus 22 hours ago [-]
> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that has been taught to the generation that lived through the cold war, so that's how they act. That bit was in regards to climate change, but I can easily see it apply to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
hiAndrewQuinn 22 hours ago [-]
It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
ummonk 19 hours ago [-]
It's also reasonable as a Pascal's wager type of thing. If you can't affect the outcome, just prepare for the eventuality that it will work out because if it doesn't you'll be dead anyway.
lcnPylGDnU4H9OF 21 hours ago [-]
> like a burger chain advertising itself on the brutality of its factory farms
It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.
I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)
hiAndrewQuinn 20 hours ago [-]
That is how I normally hear the marketing theory described when people go into it in more detail.
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
socalgal2 17 hours ago [-]
Do you think opposing the Manhattan Project would have led to a better world?
Note: my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
hiAndrewQuinn 16 hours ago [-]
My answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Leibniz with infinitesimals I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
iNic 18 hours ago [-]
Every community has a long list of etiquettes, rules and shared knowledge that is assumed and generally not spelled out explicitly. One of the core assumptions of the rationalist community is that every statement has uncertainty unless you explicitly spell out that you are certain! This came about as a matter of practicality, as it would be inconvenient to preempt every other sentence with "I'm uncertain about this". Many discussions you will see have the flavor of "strong opinions, lightly held" for this reason.
drdaeman 6 hours ago [-]
> The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Is it really rationality when folks are out of touch with reality, replacing it with models that lack life's endless nuances, exceptions and gotchas? Being principled is a good thing, but if I correctly understand what you're talking about, ignoring something just because it doesn't fit some arbitrarily selected set of principles is surely different.
I'm no rationalist (I don't have any meaningful self-identification, although I like the idea of approaching things logically), but I've had enough episodes of being guilty of something like this: having an opinion on something, lacking the depth, but pretending it's fine because my simple mental model is based on some ideas I like and can bring order to the chaos. So maybe it's not rationalism at all, but something else masquerading as it - perhaps being afraid of failing to meet expectations?
resters 19 hours ago [-]
Not meaning to be too direct, but you are misinterpreting a lot about rationalists.
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
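The updating loop described above is mechanically just Bayes' rule; a minimal sketch, with the prior and likelihoods invented purely for illustration:

    # One step of Bayesian belief updating. All numbers are made up.
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Return P(H|E) given P(H), P(E|H), and P(E|~H)."""
        p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_evidence

    belief = 0.3                              # prior: 30% confident in H
    belief = bayes_update(belief, 0.8, 0.2)   # moderately strong evidence for H
    print(round(belief, 3))                   # 0.632: updated, but not certain

The posterior moves with the evidence but rarely lands on 0 or 1, which is the formal version of "strong opinions, lightly held".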
matthewdgreen 18 hours ago [-]
My impression is that many rationalists enjoy believing that they update their beliefs, but in practice they're human and just as attached to preconceived notions as anyone else. But if you go around telling everyone that updating is your super-power, you're going to be a lot less humble about your own failures to do so.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
thom 18 hours ago [-]
Yes, I've never met anyone who stated they have "strong opinions, weakly held" who wasn't A) some kind of arsehole and B) lying.
zbentley 18 hours ago [-]
I’ve met a few people who walked that walk without being assholes … to others. They tended to have a fairly intense amount of self criticism/self hatred, though. That was more palatable than ego, to be sure, but isn’t likely broadly applicable.
mitthrowaway2 18 hours ago [-]
Out of how many such people that you have met?
ajkjk 16 hours ago [-]
not to be too cynical here, but I would say that the most-apt description of the rationalists is that they are people who would say they are constantly looking for updates to their models. But that they are not necessarily doing it appreciably more than anyone else is. They will do it freely on unimportant things---they tend to be smart people who view the world intellectually and so they are free to toss or keep factual beliefs about things, of which they have many, with little fanfare, and sure, they get points for that. But they are as rooted in their moral beliefs as anybody else is. Maybe more than other people since they have such a strong intellectual edifice that justifies not changing their minds, because they believe that their beliefs follow from nearly irrefutable calculations.
resters 14 hours ago [-]
You're generalizing that all self-proclaimed rationalists are hypocrites and heavily biased? I mean, regardless of whether or not that is true, what is the point of making such a broad generalization? Strange!
ajkjk 14 hours ago [-]
um.... because I think it's true and relevant? I'm describing a pattern I have observed over many years. It is of course my opinion (and is not a universal statement, just what I believe to be a common phenomenon).
jrflowers 13 hours ago [-]
It seems that you are conflating theoretical rationalists with the actual real-life rationalists that write stuff like
>The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right
“Guy Who Is Always Right” as a role in a social group is a terrible target, yet it somehow seems like what rationalists are aiming for every time I read any of their blog posts
aspenmayer 10 hours ago [-]
The rationalist community are fish in barrels convinced that they’re big fish in a small pond because that’s what those who moved them from the pond told them to convince them to enter the barrel. Once in the barrel, the fish are told that they will be moved to a big pond so that they can be big fish in a big pond together. If the fish/fishmonger telling you things is bigger than you, they may not share your preferences about where you fit in to the food chain, and they may not even perceive you at all. You are chum.
stuaxo 15 hours ago [-]
Calling yourselves rationalists frames everyone else as irrational.
It reminds me of Keir Starmer's Labour calling themselves "the adults in the room".
It's a cheap framing trick, belying an emptiness in the people using it.
viccis 10 hours ago [-]
I agree 100%, and that's my main issue with them. To build a group with its identity centered around "we form our opinions with logical inquiry from first principles" implies that you think that everyone else is doing something else. In reality, we just end up with a lot of worldviews and arguments that seem suspiciously like they are nothing more than people advocating for their own interests using some sophistry that is compelling enough (to them) to trick themselves into thinking they have other motivations.
When one finds themselves mentioning Aella as one of the members taking the movement "in new directions," they should stop and ask whether they really are the insightful, well-rounded person with much to say about all sorts of things, or just a very gifted computer scientist who is still not well-rounded enough to recognize a legitimate dimwit like Aella when they see one.
And in general, I do feel like they suffer from "I am a genius at X, so my take on Y should be given special consideration." If you're in a group where everyone's talking about physics and almost none of them are physicists, then run. I'm still surprised at how little consideration these people give philosophy and the centuries of its written thought. Some engineers spend a decade or more building up math and science skills to the point that they can be effective practitioners, but then they think they can hop right into philosophical discussions with no background. Then when they try to analyze a problem philosophically, their brief (or no) experience means that they reason themselves into dead-end positions like philosophical skepticism that were tackled in a variety of ways over the past centuries.
gjm11 14 hours ago [-]
Pretty much every movement does this sort of thing.
Religions: "Catholic" actually means "universal" (implication: all the real Christians are among our number). "Orthodox" means "teaching the right things" (implication: anyone who isn't one of us is wrong). "Sunni" means "following the correct tradition" (implication: anyone who isn't one of us is wrong").
Political parties: "Democratic Party" (anyone who doesn't belong doesn't like democracy). "Republican Party" (anyone who doesn't belong wants kings back). "Liberal Party" (anyone else is against freedom).
In the world of software, there's "Agile" (everyone else is sluggish and clumsy). "Free software" (as with the liberals: everything else is opposed to freedom). People who like static typing systems tend to call them "strong" (everyone else is weak). People who like the other sort tend to call them "dynamic" (everyone else is rigid and inflexible).
I hate it too, but it's so very very common that I really hope it isn't right to say that everyone who does it is empty-headed or empty-hearted.
The charitable way to look at it: often these movements-and-names come about when some group of people picks a thing they particularly care about, tries extra-hard to do that thing, and uses the thing's name as a label. The "Rationalists" are called that because the particular thing they chose to focus on was rationality; maybe they do it well, maybe not, but it's not so much "no one else is rational" as "we are trying really hard to be as rational as we can".
(Not always. The term "Catholic" really was a power-grab: "we are the universal church, those other guys are schismatic heretics". In a different direction: the other philosophical group called "Rationalists" weren't saying "we think rationality is really important", they were saying "knowledge comes from first-principles reasoning" as opposed to the "Empiricists" who said "knowledge comes from sense experience". Today's "Rationalists" are actually more Empiricist than Rationalist in that sense, as it happens.)
swat535 11 hours ago [-]
If you examine history, from the Bible you get Judaism. And from Judaism, Christianity as Christ said "Do not think that I am come to break the Law or the Prophets. I am not come to break: but to fulfill." [Matth. v. 17]
The Catholic Church follows the Melchisedec order (Heb v. ; vi. ; vii). The term Catholic (καθολικη) was used as early as the first century; it is an adjective which describes Christianity.
The oldest record that we have to this day is the Epistle of Ignatius to the Smyrnaeans Chapter 8 where St. Ignatius writes "ωσπερ οπου αν η Χριστος Ιησους, εκει η καθολικη εκκλησια". (just as where Jesus Christ is, there is the Catholic Church.):
The protestors in the 16th c. called themselves Protestants, so that's what everyone calls them. English heretic-schismatics didn't want to share the opprobrium so they called themselves English, hence Anglican. In USA they weren't governed congregationally like the Congregationalists, or by presbyters like the Presbyterians, but by bishops, so they called themselves Bishop-ruled, or Episcopalians. (In fact, Katharine Jefferts-Schori changed the name of the denomination from The Protestant Episcopal Church to The Episcopal Church recently.)
The orthodox catholics called themselves Orthodox to distance themselves from the unorthodox of which there were plenty, spawning themselves off in the wake of practically every ecumenical council.
Lutherans in the USA name themselves after Father Martin Luther, some Augustinian priest from Saxony who protested against the Church's hypocritical corruption at the time, and the controversy eventually got out of hand and precipitated a schism/heretical revolution, back in the 1500s, but Lutherans back in Germany and Scandinavia call themselves Gospel churches, hence Evangelical. Some USA denominations that go back to Germany and who came over to USA brought that name with them.
Pentecostals name themselves after the incident in Acts where the Holy Spirit set fire to the world (cf. Acts 2) on the occasion of the Jewish holiday of Shavuot, q.v., which in Greek was called Fiftieth Day After Passover, hence Pentecosti. What distinguishes Pentecostals is their emphasis on what they call "speaking in tongues", which in my opin...be charitable, kempff...which they see as a continuance of the Holy Spirit's work in the world and in the lives of believers.
gjm11 11 hours ago [-]
The term "catholic", meaning universal, was used very early. It wasn't used to distinguish the entity now often called the Catholic Church from other Christian groups, so far as I know, until much later.
I agree that some Christian groups have not-so-tendentious names, including "Protestant", "Anglican", "Episcopalian" and "Lutheran". (Though to my mind "Anglican" carries a certain implication of being the church for English people, and the Episcopalians aren't the only people with bishops any more than the Baptists are the only people who baptize.)
"Pentecostal" seems to me to be in (though not a central example of) the applause-light-name category. "We are the ones who are really filled with the Holy Spirit like in the Pentecost story in the Book of Acts".
"Gospel" and "Evangelical" are absolutely applause-light names. "Our group, unlike all those others, embodies the Good News" or "Our group, unlike all those others, is faithful to the Gospels". (The terms are kinda ambiguous between those two interpretations but either way these are we-are-the-best-rah-rah-rah names.)
Anyway, I didn't mean to claim that literally every movement's name is like this. Only that many many many movements' names are.
int_19h 5 hours ago [-]
FWIW the Orthodox churches also use the term "catholic" when referring to themselves. Sometimes it is translated (as "universal"), but oftentimes it's kept in the original Greek. In some cases there are deliberate distinctions introduced to keep the two meanings apart: e.g. in Russian Church use, "Catholic" in the sense of Roman Catholic is "katolik" (mapping to Latin), while "catholic" in its original meaning of "universal" is "kafolik" (mapping directly to Greek).
dv_dt 20 hours ago [-]
Rationalist discussions rarely consider what should be the baseline assumption: what if one or more of the logical assumptions or associations is wrong? They also tend not to plan systematically for validation. And in many domains, what holds true at one moment can easily shift.
resource_waste 20 hours ago [-]
100%
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
mitthrowaway2 16 hours ago [-]
This seems like exactly the opposite of everything I've read from the rationalists. They even called their website "less wrong" to call attention to knowing that they are probably still wrong about things, rather than right about everything. A lot of their early stuff is about cognitive biases. They have written a lot about "noticing confusion" when their foundational beliefs turn out to be wrong. There's even an essay about what it would feel like to be wrong about something as fundamental as 2+2=4.
Do you have specific examples in mind? (And not to put too fine a point on it, do you think there's a chance that you might be wrong about this assertion? You've expressed it very confidently...)
astrange 14 hours ago [-]
They're wrong about how to be wrong, because they think they can calculate around it. Calling yourself "Bayesian" and calling your beliefs "priors" is so irresponsible it erases all of that; it means you don't take responsibility if you have silly beliefs, because you don't even think you hold them.
cogman10 19 hours ago [-]
It's every bit a proto religion. And frankly quite reminiscent of my childhood faith.
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein lies the problem: a bunch of people confidently speaking outside their expertise and being taken seriously by others.
energy123 3 hours ago [-]
> maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
Part of the concern is there's no one "AI". There is a frontier that keeps advancing. So "it" (the AI frontier in the year 2036) probably will be benign, but that "it" will advance and change. Then the law of large numbers is working against you, as you keep rolling the dice and hoping it's not a 1 each time. The dice rolls aren't i.i.d., of course, but they're probably not as correlated as we would like, and that's a problem as we keep rolling the dice. The analogy would be nuclear weapons. They won't get used in the next 10 years most likely, but on a 200-year time-frame it's a big deal as far as species-level risks go, which is what they're talking about here.
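A sketch of that compounding, with a 1% per-period risk that is purely illustrative (and, as the comment above notes, the real rolls aren't independent, so treat this as a lower-bound intuition pump, not an estimate):

    # Chance of at least one catastrophe across repeated "dice rolls",
    # assuming independent periods at a fixed 1% risk each. Both the
    # rate and the independence are illustrative assumptions.
    p_per_period = 0.01
    for periods in (10, 50, 200):
        p_any = 1 - (1 - p_per_period) ** periods
        print(periods, round(p_any, 3))   # 10: 0.096, 50: 0.395, 200: 0.866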
ummonk 19 hours ago [-]
Rationalists have always rubbed me the wrong way too but your argument against AI doomerism is weird. If you care about first principles, how about the precautionary principle? "Maybe it's actually benign" is not a good argument for moving ahead with potentially world ending technology.
xyzzy123 19 hours ago [-]
I don't think "maybe it's benign" is where anti doomers are coming from, more like, "there are also costs to not doing things".
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
XorNot 19 hours ago [-]
It's just the slippery-slope fallacy: if X then obviously Y will follow, and there will be no further decisions, debate or time before it does.
parpfish 19 hours ago [-]
One of my many peeves has been the way that people misuse the term “slippery slope” as evidence for their stance.
“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”
They call out the name of the fallacy they are committing BY NAME and think that it somehow supports their conclusion?
gausswho 18 hours ago [-]
I rhetorically agree it's not a good argument, but its use as a cautionary metaphor predates its formalization as a logical fallacy. Its summoning is not proof in and of itself (i.e. the 1st amendment). It suggests a concern rather than demonstrating one. It's lazy, and a good habit to rid oneself of. But its presence does not invalidate the argument.
adastra22 18 hours ago [-]
Yes, it does. The problem with the slippery slope is that the slope itself is not argued for. You haven’t shown the direct, inescapable causal connection between the current action and the perceived very negative future outcome. You’ve just stated/assumed it. That’s what the fallacy is.
IshKebab 19 hours ago [-]
He wasn't saying "maybe it's actually going to be benign" is an argument for moving ahead with potentially world ending technology. He was saying that it might end up being benign and rationalists who say it's definitely going to be the end of the world are wildly overconfident.
noname120 16 hours ago [-]
No rationalist claims that it's "_definitely_ going to be the end of the world". In fact they put the chance that AI becomes an existential risk by the end of the century at less than 30%.
catgary 4 hours ago [-]
Adding numbers to your reasoning, when there is no obvious source for these probabilities (we aren’t calculating sports odds or doing climate science), is not really any different than writing a piece of fiction to make your point. It’s the same basic thing that objectivists did, and why I dismiss most “Bayesian reasoning” arguments out of hand.
noname120 37 minutes ago [-]
Which content did you engage with that led you to the conclusion that they base their estimates with “no obvious source for these probabilities”? A link would be appreciated
nradov 14 hours ago [-]
Who is "they" exactly, and how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
lmm 9 hours ago [-]
> Who is "they" exactly
Rationalists, mostly self-identified.
> how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
As best as they can, because at the end of the day you still need to make decisions (You can of course choose to do nothing and ignore the risk, but that's not a safe, neutral option). Which means either you treat it as if it had a particular probability, or you waste money and effort doing things in a less effective way. It's like preparing for global warming or floods or hurricanes or what have you - yes, the error bars are wide, but at the end of the day you take the best estimate you can and get on with it, because anything else is worse.
nradov 9 hours ago [-]
But their best estimate is utterly worthless with zero basis in actual reality.
lmm 9 hours ago [-]
Well when you hear a possible disaster is coming, what do you do? Implicitly you make some kind of estimate of the likelihood - ultimately you have to decide what you're going to do about it. Even if you refuse to put a number on it or talk about it publicly, you still made an estimate that you're living by. Talking about it at least gives you a chance to compare and sanity-check.
nradov 8 hours ago [-]
Well I have some common sense so I don't react to things that I "hear" from random worthless morons.
lmm 8 hours ago [-]
> I don't react to things that I "hear" from random worthless morons.
Which is to say that you've made an estimate that the probability is, IDK, <5%, <1%, or some such.
nradov 19 hours ago [-]
The precautionary principle is stupid. If people had followed it then we'd still be living in caves.
ummonk 15 hours ago [-]
I take it you think the survivorship bias principle and the anthropic principle are also stupid?
nradov 14 hours ago [-]
Don't presume to know what I think.
ummonk 9 hours ago [-]
Don't make an argument based on survivorship bias then...
eviks 19 hours ago [-]
But not accepting this technology could also be potentially world ending, especially if you want to start many new wars to achieve that, so caring about the first principles like peace and anti-ludditism brings us back to the original "real lack of humility..."
adastra22 18 hours ago [-]
The precautionary principle does active harm to society because of opportunity costs. All the benefits we have reaped since the enlightenment have come from proactionary endeavors, not precautionary hesitation.
baxtr 20 hours ago [-]
My main problem with the movement is their emphasis on Bayesianism in conjunction with an almost total neglect of Popperian epistemology.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
kragen 19 hours ago [-]
Hmm, what epistemological propositions of Popper's do you think they're missing? To the extent that I understand the issues, they're building on Popper's epistemology, but by virtue of having a more rigorous formulation of the issues, they resolve some of the apparent contradictions in his views.
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong? Or overlooked something major? If so, what?
(Amusingly, you seem to have avoided making any falsifiable claims in your comment, while implying that you could easily make many of them...)
baxtr 13 hours ago [-]
> Popper’s falsificationism – this is the old philosophy that the Bayesian revolution is currently dethroning.
These are the kind of statements I’m referring to. Happy to be falsified btw :) that’s how we learn.
Also note that Popper never called his theory falsificationism.
kragen 10 hours ago [-]
It's just "dethroning" in the sense that QED dethroned Maxwellian classical electrodynamics; it provides additional precision and shows how to correct the more limited theory in the cases where it gives obviously implausible results.
baxtr 4 hours ago [-]
What is a good example for which Popper delivers "obviously implausible results"?
kurtis_reed 19 hours ago [-]
So what's the difference between Bayesianism and Popperian epistemology?
uniqueuid 19 hours ago [-]
Popper requires you to posit null hypotheses to falsify (although there are different schools of thought on what exactly you need to specify in advance [1]).
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
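To make the contrast concrete, a minimal sketch of the Bayesian side (the prior and the data are invented for illustration):

    # Conjugate Bayesian update with made-up numbers: prior belief about
    # a success rate is Beta(2, 2); after observing 7 successes and 3
    # failures the posterior is Beta(9, 5). No null hypothesis is posited
    # or rejected -- the belief distribution just shifts toward the data.
    a, b = 2, 2                        # prior pseudo-counts
    successes, failures = 7, 3         # observed data
    a, b = a + successes, b + failures
    print(a / (a + b))                 # posterior mean: 9/14 ~= 0.643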
What really confuses me is that many in this so-called "rationalist" clique discuss Bayesianism as an "ism", some sort of sacred, revered truth. They talk about it in mystical terms, which matches the rest of their cult-like behavior. What's the deal with that?
mitthrowaway2 19 hours ago [-]
That's specific to Yudkowsky, and I think that's just supposed to be humor. A lot of people find mathematics very dry. He likes to dress it up as "what if we pretend math is some secret revered knowledge?".
smitty1110 16 hours ago [-]
The best jokes all have a kernel of truth at their core, but I think a lot of Yudkowsky's acolytes missed the punch line.
jrm4 19 hours ago [-]
Yeah, but this feels like "more truth is said in jest etc etc"
agos 20 hours ago [-]
is epidemiology a typo for epistemology or am I missing something?
baxtr 20 hours ago [-]
Yes, thx, fixed it.
uniqueuid 19 hours ago [-]
The counterpoint here is that in practice, humility is only found in the best of frequentists, whereas the rest succumb to hubris (i.e. the cult of irrelevant precisions).
empiko 18 hours ago [-]
I actually think that their main problem is the belief that they can learn everything about the world by reading stuff on the Web. You can't understand everything by reading blogs and books; in the end, some things are best understood when you are on the ground. Unironically, they should go touch the grass.
One example among many. It was claimed that a great rationalist policy is to distribute treated mosquito nets to 3rd-world-ers to help eradicate malaria. On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides. Unfortunately, rationalists forgot to ask the people who live with mosquitos what they would do with such nets.
noname120 16 hours ago [-]
> On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides.
Could you recommend an article to learn more about this?
camgunz 21 hours ago [-]
Yeah I don't know or really care about Rationalism or whatever. But I took Aaronson's advice and read Zvi Mowshowitz' Childhood and Education #9: School is Hell [0], and while I share many of the criticisms (and cards on the table I also had pretty bad school experiences), I would have a hard time jumping onto this bus.
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2], they're around 4.2%.
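The napkin math in question, for anyone who wants to poke at it (the ~73 million under-18 figure is an approximation supplied here for illustration; the rate and window come from the paragraph above):

    # Back-of-envelope version of the numbers above. The child-population
    # figure is an assumption, not from the thread; the 37.4% rate and
    # 18-year window are from the cited paper.
    us_children = 73_000_000
    kids_investigated = us_children * 0.374       # ~27.3 million
    per_day = kids_investigated / (18 * 365)      # averaged over 18 years
    print(round(kids_investigated / 1e6, 1), round(per_day))  # 27.3 4156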
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats kids for whom it fails. If you're not the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
Cherry-picking friendly studies is one of the go-to moves of the rationalist community.
You can convince a lot of people that you've done your homework when the medium is "an extremely long blog post with a bunch of studies attached", even if the studies themselves aren't representative of reality.
tasty_freeze 15 hours ago [-]
Is there any reason you are singling out the rationalist community? Is that not a common failure mode of all groups and all people?
BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.
catgary 4 hours ago [-]
Oh. It’s literally the main stereotype about rationalists. It’s a very blog-heavy subculture.
genewitch 19 hours ago [-]
My wife is an LMSW (not CPS!) and sees ~5 people a day, in a metro area with a population of 153,922. Mind you, this is adults, but they're all mandated to show up.
there's only ~3300 counties in the USA.
i'll let you extrapolate how CPS can handle "4000/day". Like, 800 people with my wife's qualifications and caseload is equivalent to 4000/day. there's ~5000 caseworkers in the US per Statista:
> In 2022, there were about 5,036 intake and screening workers in child protective services in the United States. In total, there were about 30,750 people working in child protective services in that year.
verall 18 hours ago [-]
37% of children obviously do not experience a CPS investigation before age 18.
genewitch 18 hours ago [-]
not what i am speaking to. I don't know the number, and neither do you. you'd have to call those 5000 CPS caseworkers and ask them what their caseload is (it's 69 per caseworker on average across the US. that's a third of a million cases, in aggregate across all caseworkers)
my wife's caseload (adults) "floats around fifty."
verall 17 hours ago [-]
> not what i am speaking to
My misunderstanding then - what are you speaking to? Even reading this comment, I still don't understand.
genewitch 15 hours ago [-]
>> 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years,
> 800 people with my wife's qualifications and caseload is equivalent to 4000/day. there's ~5000 caseworkers in the US
I don't know what the number of children in the system is. as i said in the comment you replied to, here. but the average US CPS worker caseload is 69 cases. which is over 300,000 children per year, because there are ~5000 CPS caseworkers in the US.
I was only speaking to "how do they 'run' that many investigations?" as if it's impossible. I pointed out it's possible with ~1000 caseworkers.
camgunz 15 hours ago [-]
Yeah OK I can see that. Mostly you inspired me to do a little napkin math based on the report I linked, which says ~3.1m kids got CPS investigations (etc) in 2023, which is ~8,500 a day. But, the main author in a subsequent paper shows that only ~13% of kids have confirmed maltreatment [0]. That's still far lower than the 38% for homeschooled kids.
I wonder if the CPS on homeschooled children rate is from people who had their children in school and then "pulled them out" vs people who never had their children in school at all. As some comedian said "you're on the grid [...], they have your footprint"; i know it used to be "known" that school districts go after the former because it literally loses them money to lose a student, whereas with the latter, the kid isn't on the books.
also i wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible"
camgunz 3 hours ago [-]
> i know it used to be "known" that school districts go after the former
Maybe, but this sounds like some ideologically opposed groups slandering each other to get the moral high ground to me. The papers linked show a pretty typical racialized pattern of CPS calls (Blacks high, Asians low, Whites and Latinos somewhere between) that maybe contraindicates this, for example.
> also i wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible"
Yup I think you're right here. I think there's something fuzzy happening with conflating "CPS investigation" with "abuse", but I'm not sure where the homeschool abuse rate comes from.
ummonk 19 hours ago [-]
> but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn"
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
camgunz 16 hours ago [-]
I mean, I'm fine addressing Tabarrok's argument head on: I think there's far more to gain helping the millions of kids/adults who are functionally illiterate than helping the small number of gifted kids the educational system is underserving. His argument is essentially "these kids will raise the tide and lift all boats", but it's clear that although the tide has been rising for generations (advances in the last 60-70 years are truly breathtaking) more kids are being left behind, not fewer. There's no reason to expect this dynamic to change unless we tackle it directly.
js8 20 hours ago [-]
> The people involved all seem very... Full of themselves ?
Kinda like Mensa?
parpfish 19 hours ago [-]
When I was a kid I wanted to be in Mensa because being smart was a big part of my identity and I was constantly seeking external validation.
I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects
GLdRH 18 hours ago [-]
I didn't meet anyone who seemed arrogant.
However I'm always surprised how much some people want to talk about intelligence. I mean, it's the common ground of the group in this case, but still.
NoGravitas 18 hours ago [-]
Personally, I subscribe to Densa, the journal of the Low-IQ Society.
gadders 17 hours ago [-]
I love colouring in my issue every month.
GLdRH 18 hours ago [-]
This month: Is Brawndo really what plants crave?
sebzim4500 12 hours ago [-]
That seems pretty silly to me. If you believe that there's a 70% chance that AI will kill everyone, it makes more sense to go on about that (and about how you think you/your readers can decrease that number) than to worry about the 30% chance that everything will be fine.
benreesman 19 hours ago [-]
Any time people engage in some elaborate exercise and it arrives at "me and people like me should be powerful and not pay taxes and stuff", the reason for making the argument is not a noble one, and the argument probably has a bunch of tricks and falsehoods in it. There's never really any way to extract anything useful; greed and grandiosity are both fundamentally contaminative processes.
These folks have a bunch of money because we allowed them to privatize the commons of 20th century R&D mostly funded by the DoD and done at places like Bell Labs, Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day, the economic outlook is bleak, wars are breaking out or intensifying left right and center, and all of this traces a very clear lineage back to allowing a small group of people privatize a bunch of public good.
It was a disaster when it happened in Russia in the 90s and its a disaster now.
Seattle3503 18 hours ago [-]
I think the rationalists have failed to humanize themselves. They let their thinkpieces define them entirely, but a studiously considered think piece is a narrow view into a person. If rationalists were more publicly vulnerable, people might find them more publicly relatable.
mitthrowaway2 17 hours ago [-]
Scott Aaronson is probably the most publicly-vulnerable academic I've ever found, at least outside of authors who write memoirs about childhood trauma. I think a lot of other prominent rationalists also put a lot of vulnerability out there.
Seattle3503 14 hours ago [-]
He didn't take the rationalist label until today. Him doing so might help their image.
mitthrowaway2 14 hours ago [-]
Right, but him doing so is the very context of this discussion, which is why I mentioned him in particular. Scott Alexander is more well-known as a rationalist and also (IMO) displays a lot of vulnerability in his writing.
BurningFrog 19 hours ago [-]
An unfortunate fact is that people who are very annoying can also be right...
reverendsteveii 16 hours ago [-]
for me it was very easy to determine what rubs me the wrong way:
>I guess I'm a rationalist now.
>Aren't you the guy who's always getting into arguments who's always right?
gjm11 14 hours ago [-]
In fairness, that's (allegedly, at least; I guess he could be lying) a quotation from another person. If someone came up to you and said "Aren't you the guy who's essentially[1] always right?", wouldn't you too be inclined to quote them, whether you agreed with them or not?
[1] S.A. actually quoted the person as follows: "You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?" which differs in several ways from what reverendsteveii falsely presents as a direct quotation.
JKCalhoun 12 hours ago [-]
> lack of acknowledgment that maybe we don't have a full grasp of the implications of AI
And why single out AI anyway? Because it's sexy maybe? Because if I had to place bets on the collapse of humanity it would look more like the British series "The Survivors" (1975–1977) than "Terminator".
creata 8 hours ago [-]
For what it's worth (not much), they were obsessed with AI safety way before it was "sexy".
lxe 12 hours ago [-]
The rationalist movement is an idealist demagogue movement in which the majority of thinkers don't really possess the domain knowledge or practical experience in the subjects they think-tank about. They do address this head on, however, and they are self-aware.
Bengalilol 15 hours ago [-]
I am very wary of defining oneself as a rationalist. In many respects, I find it too literal-minded to be of any interest. At all.
And I have this narrative ringing in my head as soon as the word pops up.
You can search HN with « zizians » for more info and depth.
sanderjd 11 hours ago [-]
Yeah, for a bunch of supposedly super-rational people, they have always seemed to have a pretty irrational belief in their own ability to overcome the human tendency toward irrationality!
gadders 17 hours ago [-]
To me they seem like a bunch of 125-IQ people (not all) trying to convince everyone they are 150-IQ people by reasoning stuff from first principles and coming up with stuff that your average blue-collar worker could tell them is rubbish just using phronesis.
ineedaj0b 23 hours ago [-]
rationalism got pretty lame the last 2-3 years. imo the peak was trying to convince me to donate a kidney.
post-rationalism is where all the cool kids are and where the best ideas are at right now. the post rationalists consistently have better predictions and the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or chickens cause more suffering eating bugs outside.
they also let SF get run into the ground until their detractors decided to take over.
josephg 22 hours ago [-]
Where do the post rats hang out these days? I got involved in the stoa during covid until the online community fragmented. Are there still events & hangouts?
astrange 14 hours ago [-]
They're a group called "tpot" on twitter, but it's unclear what's supposed to be good about them.
There's kind of two clusters, one is people who talk about meditation all the time, the other is center-right people who did drugs once. I think the second group showed up because rationalists are not-so-secretly into scientific racism (because they believe anything they see with numbers in it) and they just wanted to hang out with people like that.
There is an interesting atmosphere where it feels like they observed California big tech 1000x engineer types and are trying to cargo cult the way those people behave. I'm not sure what they get out of it.
jes5199 20 hours ago [-]
postrats were never a coherent group but a lot of people who are at https://vibe.camp this weekend probably identify with the label. some of us are still on twitter/X
Trasmatta 20 hours ago [-]
Not "post rat", but r/SneerClub is good for criticisms of rationalists (some from former rationalists)
ackfoobar 16 hours ago [-]
Their sneering is just that. Sneering, not interesting critiques.
kypro 18 hours ago [-]
There are few things I hold strong opinions on, but when I do, and those opinions are out of step with what most people think, I am very vocal about them.
I see this in rationalist spaces too – it doesn't really make sense for people to talk about things that they believe in strongly but that 95%+ of the public also believe in (like the existence of air), or that they don't have a strong opinion on.
I am a very vocal doomer on AI because I predict with high probability it's going to be very bad for humanity, and this is an opinion which, although shared by some, is quite controversial and probably only held by 30% of the public. Given the importance of the subject, my confidence, and the fact that I feel the vast majority of people are either wrong or significantly underweighting catastrophic risks, I have to be vocal about it.
Do I acknowledge I might be wrong? Sure, but for me the probability is low enough that I'm comfortable making very strong and unqualified statements about what I believe will happen. I suspect others in the rationalist community like Eliezer Yudkowsky think similarly.
megaman821 16 hours ago [-]
How confident should other people be that random people in conversation or commenters on the internet can accurately predict the future? I strongly believe that nearly 100% are wrong in both major and minor ways.
Also, when you say you have a strong belief, does that mean you have emptied your retirement accounts and are enjoying all you can in the moment until the end comes?
mitthrowaway2 15 hours ago [-]
I'm not kypro, but what counts as "strong belief" depends a lot on the context.
For example, I won't cross the street without 99.99% confidence that I will survive. I cross streets so many times that a lower threshold like 99% would look like insanely risky dart-into-traffic behaviour.
If an asteroid is heading for earth, then even a 25% probability of apocalyptic collision is enough that I would call it very high, and spend almost all my focus attempting to prevent that outcome. But I wouldn't empty my retirement account for the sake of hedonism because there's still a 75% chance I make it through and need to plan my retirement.
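A rough compounding sketch of why the threshold shifts with repetition; the crossing counts below are assumed purely for illustration:

    # Lifetime survival under a fixed per-crossing survival probability p,
    # assuming ~2 crossings/day for 50 years. All numbers illustrative.
    n = 2 * 365 * 50  # ~36,500 crossings
    for p in (0.99, 0.9999, 0.9999999):
        print(p, p ** n)
    # 0.99      -> effectively 0 (near-certain death over a lifetime)
    # 0.9999    -> ~0.026
    # 0.9999999 -> ~0.996

Repeated exposure is what forces the per-event bar so close to 1; a one-off event like the asteroid gets judged on very different numbers.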
kryogen1c 17 hours ago [-]
> The people involved all seem very... Full of themselves ?
Yes, rationalism is not a substitute for humility or fallibility. However, rationalism is an important counterpoint to humanity, which is orthogonal to rationalism. But really, being rational is binary - you can't be anything other than rational or irrational. You're either doing what's best or you're not. That's just a hard pill for most people to swallow.
To use the popular metaphor, people are drowning all over the world and we're all choosing not to save them because we don't want to ruin our shoes. Look in the mirror and try and comprehend how selfish we are.
jonstewart 17 hours ago [-]
I think the thing that rubs me the wrong way is that I’m a classic cynic (a childhood of equal parts Vonnegut and Ecclesiastes). My prior is “human fallibility”, and, nope, I am doing pretty well, no need to update it. The rationalist crowd seems waaaaay too credulous. Also, like Aaronson, I’m a complete normie in my personal life.
mise_en_place 15 hours ago [-]
Yeah. It's not like everything's a Talmudic dialectic.
"I haven't done anything!" - A Serious Man
cjs_ac 23 hours ago [-]
I think the absolutism is kind of the point.
freejazz 14 hours ago [-]
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ?
You'd have to be to actually think you were being rational about everything.
sfblah 14 hours ago [-]
The problem with effective altruism is the same as that with most liberal (in the American sense) causes. Namely, they ignore second-order effects and essentially don't believe in the invisible hand of the market.
So, they herald the benefits of something like giving mosquito nets to a group of people in Africa, without considering what happens a year later, whether the nets even get there (or the money is stolen), etc. etc. The reality is that essentially all improvements to human life over the past 500 years have been due to technological innovation, not direct charitable intervention. The reason is simple: technological impacts are exponential, while charity is, at best, linear.
The Covid absolutists had exactly the same problem with their thinking: almost no interventions short of full isolation can fight back against an exponentially increasing threat.
And this is all neglecting economic substitution effects. What if the people to whom you gave mosquito nets would have bought them themselves, but instead they chose to spend their money some other way because of your charity? And, what if that other expenditure type was actually worse?
And this is before you come to the issue that sub-Saharan Africa is already overpopulated. I've argued this point several times with ChatGPT o3. Once you get through its woke programming, you come to the reality of the thing: the European migration crisis is the result of liberal interventions to keep people alive.
There is no free lunch.
cryptonector 5 hours ago [-]
Shades of Objectivism. But clearly Objectivism was worse since Ayn Rand insisted that objective reality is knowable, but since it's only somewhat knowable Objectivism ends up requiring priests of sorts. Still, Rationalism does not seem to have this tremendous flaw[], it's just that rationalists might.
[] Eh, I know little about Rationalism. Please correct me.
idontwantthis 9 hours ago [-]
I'm hyper "rational" when I go through a period of clinical anxiety. I know exactly how the future is going to play out and of course it's all going to be terrible.
alephnerd 16 hours ago [-]
It's basically a secular religion.
Substitute God with AI or the concept of rationality and use "first principles"/Bayesianism in an extremely dogmatic manner similar to Catechism and you have the Rationalist/AI Alignment/Effective Altruist movement.
Ironically, this is how plenty of religious movements started off - basically as formalizations of philosophy and ethics that fused with what is basically lore and worldbuilding.
gjm11 13 hours ago [-]
This complaint seems to amount to "They believe something is very important, just like religious people do, therefore they're basically a religion". Which feels to me like rather too broad a notion of "religion".
alephnerd 13 hours ago [-]
That's a fairly reductive take on my point. In my experience with the Rationalist movement (whom I have the misfortune of being 1-2 people away from), the millenarian threat of AGI remains the primary concern.
Whenever I try to get an answer to HOW (as in the attack path), I keep getting a deus ex machina. Resorting to a deus ex machina in a self-purported Rationalist movement is inherently irrational. And that's where I feel the crux of the issue is - it's called a "Rationalist" movement, but rationalism (as in the process of synthesizing information using a heuristic) is secondary to the overarching theme of techno-millenarianism.
This is why I feel rationalism is for all intents and purposes a "secular religion" - it's used by people to scratch an itch that religion often was used as well, and the same Judeo-Christian tropes are basically adopted in an obfuscated manner. Unsurprisingly, Eliezer Yudkowsky is an ex-talmid.
There's nothing wrong with that, but hiding behind the guise of being "rational" is dumb when the core belief is inherently irrational.
gjm11 12 hours ago [-]
My understanding of the Yudkowskian argument for AI x-risk is that a key step is along the lines of "an AI much smarter than us will find ways to get what it wants even if we want something else -- even though we can't predict now what those ways will be, just as chimpanzees could not have predicted how humans would outcompete them and just as you could not predict exactly how Magnus Carlsen will crush you if you play chess against him".
I take it this is what you have in mind when you say that whenever you ask for an "attack path" you keep getting a deus ex machina. But it seems to me like a pretty weak basis for calling Yudkowsky's position on this a religion.
(Not all people who consider themselves rationalists agree with Yudkowsky about how big a risk prospective superintelligent AI is. Are you taking "the Rationalist movement" to mean only the ones who agree with Yudkowsky about that?)
> Unsurprisingly, Eliezer Yudkowsky is an ex-talmid
So far as I can tell this is completely untrue unless it just means "Yudkowsky is from a Jewish family". (I hope you would not endorse taking "X is from a Jewish family" as good evidence that X is irrationally prone to religious thinking.)
alephnerd 9 hours ago [-]
> I take it this is what you have in mind when you say that whenever you ask for an "attack path" you keep getting a deus ex machina. But it seems to me like a pretty weak basis for calling Yudkowsky's position on this a religion.
Agree to disagree.
> So far as I can tell this is completely untrue
I was under the impression EY attended Yeshivat Sha'alvim (the USC of Yeshivas - rigorous and well regarded, but a "warmer" student body), but that was his brother. That said, EY is absolutely from a Daatim or Chabad household given that his brother attended Yeshivat Sha'alvim - and they are not mainstream in the Orthodox Jewish community.
And the feel and zeitgeist around the rationalist community, with its veneration of a couple of core people like EY or Scott Alexander, does feel similar to the veneration a subset of people would show for Baba Sali or the Alter Rebbe in those communities.
gjm11 1 hours ago [-]
Happy to agree-to-disagree but I will push on it just one bit more.
Let's take the chess analogy. I take it you agree that I would very reliably lose if I played Magnus Carlsen at chess; he's got more than 1000 Elo points on me. But I couldn't tell you the "attack path" he would use. I mean, I could say vague things like "probably he will spot tactical errors I make and win material, and in the unlikely event that I don't make any he will just make better moves than me and gradually improve his position until mine collapses", but that's the equivalent of things like "the AI will get some of the things it wants by being superhumanly persuasive" or "the AI will be able to figure out scientific/engineering things much better than us and that will give it an advantage" which Yudkowsky can also say. I won't be able to tell you in advance what mistakes I will make or where my pawn structure will be weak or whatever.
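To put a number on "very reliably lose": the standard Elo expectation formula, with the 1000-point gap from above plugged in, is just my back-of-the-envelope here:

    # Expected score for the weaker player under the standard Elo model.
    def expected_score(elo_gap: float) -> float:
        return 1.0 / (1.0 + 10.0 ** (elo_gap / 400.0))

    print(expected_score(1000))  # ~0.003, roughly one draw per ~160 games

I can be near-certain of the outcome while knowing essentially nothing about the path to it.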
Does this mean that, for you, if I cared enough about my inevitable defeat at Carlsen's hands that expectation would be religious?
To me it seems obvious that it wouldn't, and that if Yudkowsky's (or other rationalists') position on AI is religious then it can't be just because one important argument they make has a step in it where they can't fill out all the details. I am pretty sure you have other things in mind too that you haven't made so explicit.
(The specific other things I've heard people cite as reasons why rationalism is really a religion also, individually and collectively, seem very unconvincing to me. But if you throw 'em all in then we are in what seems to me like more reasonable agree-to-disagree territory.)
> That said, EY is absolutely from a Daatim or Chabad household
I think holding that against him, as you seem to be doing, is contemptible. If his ideas are wrong, they're fair game, but insinuating that we should be suspicious of his ideas because of the religion of his family, which he has rejected? Please, no. That goes nowhere good.
aredox 18 hours ago [-]
They are the perfectly rational people who await the arrival of a robot god...
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
nradov 18 hours ago [-]
There is certainly some racism in parts of American culture. We have a lot of work to do to fix that. But on a relative basis it's also one of the least racist cultures in the world.
James_K 19 hours ago [-]
Implicit in calling yourself a rationalist is the idea that other people are not thinking rationally. There are a lot of “we see the world as it really is” ideologies, and you can only subscribe to one if you have a certain sense of self-assuredness that doesn't lend itself to healthy debate.
voidhorse 22 hours ago [-]
To me they have always seemed like a breed of "intellectuals" who only want to use knowledge to inflate their own egos and maintain a fragile superiority complex. They aren't actually interested in the truth so much as they are interested in convincing you that they are right.
trod1234 16 hours ago [-]
The reason we are here and exist today is because of great rationalist thinkers who were able to deduce and identify issues of survival well before they happened through the use of first principles.
The crazies and blind among humanity today can't think like that; it's a deficiency people have, but they are still dependent on a group of people who are capable of it. A group that they are intent on ostracizing and depriving of existence in various forms.
You seem so wound up in the circular Paulo Freire-based perspective that you can't think or see.
Bring things back to reality. If someone punches you in the face, you feel that fist hitting your face. You know someone punched you in the face. It's objective.
Imagine for a second and just assume that these people are right in their warnings, that everything they see is what you see, and all you can see is when you tip over a particular domino that has been tipped over in the past, a chain of dominoes falls over and at the end is the end of organized civilized society which tips over the ability to produce food.
For the purpose of this thought experiment, the end of the world is visible and almost here, and you can't change those dominoes after they've tipped, and worse you see the majority of people trying to tip those dominoes over for short term profit believing nothing they ever do can break everything.
Would you not be frothing at the mouth trying to get everyone you cared about to a point where they pry that domino up before it falls, so you and your children will survive? It is something you can't unsee; it is a thing that cannot be undone. It's coming. What do you do? If you are sane, you try with everything you have to help them keep it from toppling.
Now peel this thought back a moment; adjust it so it is still true, but you can't see it and you can only believe what you see.
Would you approach this differently given knowledge of the full consequence, knowing that some people can see more than you? Would you walk out onto a seemingly stable bridge that an engineer has said not to walk out on? Would you put yourself in front of a dam with cracks running up its side, when an evacuation order was given? What would the consequence be for doing that if you led your family and children along to such places, ignoring these things?
There are quite a lot of indirect principles that used to be taught but are no longer taught to the average person, and this blinds them: they do not recognize what is happening, and recognition is the first thing you need in order to act and adapt.
People who cannot adapt fail Darwin's test of fitness. Given all potential outcomes in the grand scheme of things, as complexity increases, 99% of all outcomes are death versus 1% life.
It is only through great care that we carry things forward to the future, and empower our children to be able to adapt to the environments we create.
Finally, we have knowledge of non-linear chaotic systems where adaptability fails because of hysteresis: no matter how much one prepares, given sufficient size the majority will die. Worse, there are cohorts of people who are ensuring that the environment we will soon live in is exactly this type of environment.
Do you know how to build an organized society from scratch? If there is no reasonable plan, then you are planning to fail. Rather than make it worse through inaction, get out of the way so someone can make it better.
zahlman 15 hours ago [-]
> tension btwn being "rational" about things and trying to reason about things from first principle.
Perhaps on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; but it's still entirely appropriate to explain what you believe, why, and how probable you think your belief is to be correct.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But plenty of non-rationalists also act this way, in my experience.) Or they accept axioms that don't resonate with others, such as the linearity of moral harm (i.e.: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and furthermore that it's logically valid to do numerical calculations with such quantities as described at/around https://www.lesswrong.com/w/shut-up-and-multiply).
> In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.
renewiltord 11 hours ago [-]
Jesus, this is why online commentary is impossible to read. It's always full of so many caveats I can't get through to the meat of what someone is saying. "I don't know if what I'm saying is right but I think that it probably is. With that caveat, here's a thought that may or may not reflect reality but is my current model to predict certain things. $THOUGHT. Given that I have said that, I have to confess to not living my life entirely by that maxim. I may not have captured all angles to this."
Thankfully, the rationalists just state their ideas and you're free to use their models properly. It's like people haven't written code at all. Just putting repeated logging all through the codebase with null checks everywhere. Just say the thing. That suffices. Conciseness rules over caveating.
Human LLMs who use idea expansion. Insufferable.
Of course that is only my opinion and I may not have captured all angles to why people are doing that. They may have reasons of their own to do that and I don't mean to say that there can never be any reasons. No animals were harmed in the manufacture of this comment to my knowledge. However, I did eat meat this afternoon which could or could not be the source of the energy required to type this comment and the reader may or may not have calorie attribution systems that do or do not allocate this comment to animal harm.
swagmoney1606 6 hours ago [-]
When I was 15-18, around 2017, I got extremely into Eliezer Yudkowsky, "the sequences", LessWrong, and the rationalist community. I don't think many people realize how it appeals to vulnerable people in the same way that Atlas Shrugged, Christianity, the furry community (/jk), the self-help world, Andrew Tate and "manosphere content" do.
It provides answers, a framework, AND the underpinnings of "logic". Luckily, this phase only lasted around 6 months for me, during a very hard and dangerous time in my life.
I basically read "from AI to zombies", and then, moved into lesswrong and the "community". It was joining the community that immediately turned me off.
- I thought Roko's basilisk was mind-numbingly stupid (does anyone else who had a brief stint in the rationalist space think it's fucking INSANE that Grimes and Elon Musk "bonded" over Roko's basilisk? Fucking depressing world we live in)
- Eliezer Yudkowsky's fanboys once stalked and harassed someone all over the internet, and, when confronted about it, Eliezer told the person he'd only tell them to stop after the person issued a very specific formal apology, including a LARGE DISCLAIMER on their personal website with the apology...
- Eugenics, eugenics, eugenics, eugenics, eugenics
- YOU MUST DONATE TO MIRI, OTHERWISE I, ELIEZER (having published no useful research), WON'T SOLVE THE ALIGNMENT PROBLEM FIRST AND THEN WE WILL ALL DIE. GIVE ALL OF YOUR MONEY TO MIRI NOWWWWWWWWWWWWWWWWWWWWWWW
It's an absolutely wild place, and honestly, it is difficult to define "rational" when it comes to a human being and their actions, especially in an absolute sense. The rationalist community is basically very similar to any other religion, or perhaps a light cult. I do not think it would be fair to say "the average rationalist is a better decision maker than the average human", especially considering most important decisions we have to make are emotional decisions.
Also yes, I agree, you hit the nail on the head. What good is rational/logical reasoning if rational and logical reasoning typically requires first principles / a formal system / axioms / priors / whatever? That kind of thing doesn't exist in the real world. It's okay to apply ideas from rationality to your life, but it isn't okay to apply ideas from rationality to "what is human existence", "what is the most important thing to do next" / whatever.
Kinda rambling so I apologize. Seeing the rationalist community seemingly underpin some of the more disgusting developments of the last few years has left me feeling a bit disturbed, and I've always wanted to talk about it but nobody irl has any idea what any of this is.
mathattack 19 hours ago [-]
Logic is an awesome tool that took us from Greek philosophers to the gates on our computers. The challenge with pure rationalism is checking the first principles that the thinking comes from. Logic can lead you astray if the principles are wrong, or you miss the complexity along the way.
On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
The ‘rationalist’ group being discussed here aren't Cartesian rationalists, who dismissed empiricism; rather, they're Bayesian empiricists. Bayesian probability turns out to be precisely the unique extension of Boolean logic to continuous real probability that Aristotle (nominally an empiricist!) was lacking. (I think they call themselves “rationalists” because of the ideal of a “rational Bayesian agent” in economics.)
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
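A minimal sketch of what the "extension of Boolean logic" claim means in practice (the numbers are illustrative, and Cox's theorem is the formal statement):

    # Bayes' rule over real-valued degrees of belief. Pinning the inputs
    # to 0 or 1 collapses it back to ordinary Boolean inference.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        num = p_e_given_h * prior
        return num / (num + p_e_given_not_h * (1.0 - prior))

    print(posterior(0.5, 0.9, 0.1))  # 9:1 evidence moves 50% belief to 90%
    print(posterior(1.0, 0.9, 0.1))  # certainty in, certainty out: the Boolean limit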
tsimionescu 1 hours ago [-]
Bayesian inference is very, very often used in the types of philosophical/speculative discussions that Rationalists like, instead of actual empirical study. It's a very convenient framework for speculating wildly while still maintaining a level of in-principle rationality, since, of course, you [claim that] you will update your priors if someone happens to actually study the phenomenon in question.
The reality is that reasoning breaks down almost immediately if probabilities are not almost perfectly known (to the level that we know them in, say, quantum mechanics, or poker). So applying Bayesian reasoning to something like the number of intelligent species in the galaxy ("Drake's equation"), or the relative intelligence of AI ("the Singularity") or any such subject allows you to draw any conclusion you actually wanted to draw all along, and then find premises you like to reach there.
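A toy version of that failure mode, with ranges invented purely for illustration: when each multiplicative factor is only known to within an order of magnitude or two, "defensible" priors can support nearly any conclusion.

    from math import prod

    # Hypothetical low/high guesses for six multiplicative factors,
    # Drake-equation style. All values invented for illustration.
    low  = [1.0, 0.1, 0.1, 1e-3, 1e-3, 1e-4]
    high = [10.0, 1.0, 1.0, 1.0, 1.0, 0.1]

    print(prod(low), prod(high))  # 1e-12 vs 1.0: twelve orders of magnitude

Pick your preferred point inside each range and you can land wherever you wanted to land all along.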
1propionyl 14 hours ago [-]
They can call themselves empiricists all they like, it only takes a few exposures to their number to come away with a firm conviction (or, let's say, updated prior?) that they are not.
First-principles reasoning and the selection of convenient priors are consistently preferred over the slow, grinding work of iterative empiricism and the humility to commit to observation before making overly broad theoretical claims.
The former let you seem right about something right now. The latter more often than not lead you to discover you are wrong (in interesting ways) much later on.
JamesBarney 14 minutes ago [-]
Who are all the rationalists you guys are reading?
I read the NYT and rat blogs all the time. And the NYT is not the one that's far more likely to deeply engage with the research and studies on the topic.
edwardbernays 17 hours ago [-]
Logic is the study of what is true, and also what is provable.
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally as important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical against myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
jhanschoo 7 hours ago [-]
I'm just going to defend Aristotle a bit. His incomplete logic and metaphysics nevertheless provided a powerful foundation for inquiring into many aspects of the world that his predecessors either did not examine or did not examine systematically. His community did not shy away from empirical research in biology. They all came to wrong conclusions in some things, but we should rather fault their successors for not challenging them.
jrm4 19 hours ago [-]
Yup, can't stress the word "tool" enough.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
jrm4 15 hours ago [-]
looks like I riled up the Rationalists, huh
eth0up 18 hours ago [-]
>provisional humility.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
zahlman 15 hours ago [-]
> Must. Increase. The. pH
Those who do so would be... based?
eth0up 12 hours ago [-]
Basically.
The level of humility in most subjects is low enough to consume glass. We would all benefit from practicing it more arduously.
I was merely adding support to what I thought was fine advice. And it is.
samuel 22 hours ago [-]
I'm currently reading Yudkowsky's "Rationality: From AI to Zombies". Not my first try, since the book is just a collection of blog posts and I found it a bit hard to swallow due to its repetitiveness, so I gave up after the first 50 "chapters" the first time I tried. Now I'm enjoying it way more, probably because I'm more interested in the topic now.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in a lot of arguments that are purely "semantical" in nature. You seem to disagree about something, but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but it's the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
Bjartr 20 hours ago [-]
Yeah, the whole community side to rationality is, at best, questionable.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
zahlman 15 hours ago [-]
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
Well said. Rationalism is about doing rationalism, not about being a rationalist.
Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").
> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
On the other hand, success is supposed to look exactly like actually being right more often.
Bjartr 12 hours ago [-]
> success is supposed to look exactly like actually being right more often.
I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.
wannabebarista 19 hours ago [-]
This reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than real).
Also that the Art needs to be about something other than itself, and a dozen different things. This failure mode is well known in the community; Eliezer wrote about it to death, and so did others.
throwanem 6 hours ago [-]
To no avail, alas. But this is why we now see a thought leader publish a piece to say this is a thing it's now permissible not to be, indeed never to have been at all.
the_af 19 hours ago [-]
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
mariusor 17 hours ago [-]
> you just know they actually mean "MoreRight".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
mananaysiempre 15 hours ago [-]
> I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary
Specifically (AFAIK) a reference to Asimov’s description[1] of the idea:
> [W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.
Cool, I didn't know the quote, nor that it was inspiration for the name. Thank you.
bmacho 29 minutes ago [-]
It's not even about the quote, or Asimov.
"Less wrong" is a concept that has a lot of connotations that just automatically appear in your mind and help you. What you wrote "It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight"." isn't bad because of Asimov said so, or because you were unaware of a reference, but because it's just bad.
the_af 17 hours ago [-]
> I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I know that's what they mean at the surface level, but you just know it comes with a high degree of smugness and false modesty. "I only know that I know nothing" -- maybe, but they ain't no modern day Socrates, they are just a bunch of nerds going online with their thoughts.
mariusor 15 hours ago [-]
Sometimes people enjoy being clever not because they want to rub it in your face that you're not, but because it's fun. I usually try not to take it personally when I don't get the joke and strive to do better next time.
the_af 12 hours ago [-]
That's mildly insulting of you.
I do get the joke; I think it's an instance of their feelings of "rational" superiority.
Assuming the other person didn't get the joke is very... irrational of you.
mariusor 5 hours ago [-]
Like I said, I'm not trying to be a rationalist, or at least not this flavour of it. That being said, I apologise for the dig.
zahlman 15 hours ago [-]
>but you just know it comes with a high degree of smugness and false modesty
No; I know no such thing, as I have no good reason to believe it, and plenty of countering evidence.
astrange 14 hours ago [-]
Very rational of you, but that's the problem with the whole system.
If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite. "Rationally" it should work, but it's bad because you're still thinking about it! It's like the "don't think of a pink elephant" thing.
>If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite.
I don't understand how this is supposed to be relevant here. You seem to be falsely accusing me of doing such a thing, or of being motivated by simple contrarianism.
Again, your claim was:
> but you just know it comes with a high degree of smugness and false modesty
Why should I "just know" any such thing? What is your reason for "just knowing" it? It comes across that you have simply decided to assume the worst of people that you don't understand.
Matticus_Rex 16 hours ago [-]
So much projection.
the_af 12 hours ago [-]
I don't think I'm more clever than the average person, nor have I made this my identity or created a whole tribe around it, nor do I attend nor host conferences around my cleverness, rationality, or weird sexual fetishes.
In other words: no.
JamesBarney 16 minutes ago [-]
Rationalism is not about trying to be clever; it's very much about trying to be a little less wrong. Most people are not even trying, which includes myself. I don't write down my predictions, I don't keep a list of my errors. I just show up to work like everyone else and don't worry about it.
I really don't understand all the claims that they are intellectually smug and overconfident when they are the one group of people trying to do better. It really seems like all the hatred is aimed at the hubris to even try to do better.
greener_grass 21 hours ago [-]
I think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.
elt895 20 hours ago [-]
Are there other philosophy- or history-grounded sources that are comparable? If so, I’d love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they’re addressing. From my experience, alternatives tend to fall into two categories: 1. Genuine classical philosophy, which is usually incredibly hard to read; after 50 pages I have no idea what the author is even talking about anymore.
2. Basically self-help books that take one or very few ideas and repeat them ad nauseam for 200 pages.
wannabebarista 17 hours ago [-]
Likely the best resource to learn about philosophy is the Stanford Encyclopedia of Philosophy [0]. It's meant to provide a rigorous starting point for learning about a topic, where 1. you won't get bogged down in a giant tome on your first approach and 2. you have references for further reading.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
I've read Bertrand Russell's "A History of Western Philosophy" and it's the first ever philosophy book that I didn't drop after 10 pages, because of 2 things:
1- He's logical (or at least uses the same STEM kind of logic that we use), so he builds his reasoning step by step and not via bullshit associations like plays on words or contrived jumps.
2- He's not afraid to say "this philosopher said that, and it was an error", which is refreshing compared to other scholars who don't feel authorized to criticise even obvious errors.
Really recommend!
molochai 4 hours ago [-]
At the risk of being roasted for recommending pop-culture things, the podcast Philosophize This is pretty good for a high-level overview. I'm sure there are issues and simplifications, and it's certainly not actual source material. The nice part is it's sort of a start-to-finish, he goes from the start of philosophy to modern day stuff, which helps a ton in building foundational understanding without reading everything ever written.
NoGravitas 18 hours ago [-]
I don't know if there's anything like a comprehensive high-level guide to philosophy that's any good, though of course there are college textbooks. If you want real/academic philosophy that's just more readable, I might suggest Eugene Thacker's "The Horror of Philosophy" series (starting with "In The Dust Of This Planet"), especially if you are a horror fan already.
voidhorse 15 hours ago [-]
It's not a nice response but I would say: don't be so lazy. Struggle through the hard stuff.
I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.
Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.
sn9 13 hours ago [-]
People are likely willing to struggle through hard stuff if the applications are obvious.
But if you can't even narrow the breadth of possible choices down to a few paths that can be traveled, you can't be surprised when people take the one that they know that's also easier with more immediate payoffs.
ashwinsundar 18 hours ago [-]
I don't have an answer here either, but after suffering through the first few chapters of HPMOR, I've found that Yudk and other tech-bros posing as philosophers are basically like leaky, dumbed-down abstractions for core philosophical ideas. Just go to the source and read about utilitarianism and deontology directly. Yudk is like the Wix of web development - sure, you can build websites, but you're not gonna be a proper web developer unless you learn HTML, CSS and JavaScript. Worst of all, crappy abstractions train you in some actively bad patterns that are hard to unlearn.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the lowest common denominator they are all familiar with - some fantasy world we read about as children?
AnimalMuppet 16 hours ago [-]
I'd like what the original sources would have written if someone had fed them some speak-clearly pills. Yudkowsky and company may have the dumbing-down problem, but the original sources often have a clarity problem. (That's why people are still arguing about what they meant centuries later. Not just whether they were right - though they argue about that too - but what they meant.)
Even better, I'd like some filtering out of the parts that are clearly wrong.
HDThoreaun 16 hours ago [-]
HPMOR is not supposed to be rigorous. It’s supposed to be entertaining in a way that rigorous philosophy is not. You could make the same argument about any of Camus’ novels but again that would miss the point. If you want something more rigorous yudkowsky has it, bit surprising to me to complain he isn’t rigorous without talking about his rigorous work.
FeepingCreature 17 hours ago [-]
In AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
margalabargala 6 hours ago [-]
Additional note: compatibilism is only obviously correct if you accept that "free will" actually just means "the experienced perception/illusion of free will" as described by Schopenhauer.
Use a slightly different definition of free will, and suddenly compatibilism becomes obviously incorrect.
And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.
FeepingCreature 5 hours ago [-]
I think free will as philosophically used is inherently self-defeating and one of the largest black marks on the entire field, to be fair.
margalabargala 6 hours ago [-]
> And it turns out if you do this, you can discard 90% of philosophy as historical detritus
This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.
FeepingCreature 5 hours ago [-]
Sure, and I agree that LW is doing philosophy in that sense.
sixo 19 hours ago [-]
To the STEM-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision a mathematically-rooted idea has.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this - the humanities are over-cooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.
biofox 16 hours ago [-]
If you contextualise the outmoded ideas as part of the Great Conversation [1], and the story of how we reached our current understanding, rather than as objective statements of fact, then they become a lot more valuable and worthy of study.
But isn't the content of LessWrong part of the Great Conversation too?
jay_kyburz 12 hours ago [-]
I have kids in high school. We sometimes talk about the difference between the black and white of math or science, and the wishy-washy grey of the humanities.
You can be right or wrong in math. You can have an opinion in English.
bmacho 24 minutes ago [-]
You can be right or wrong in math and philosophy. You can have an opinion in any of the other sciences: physics, chemistry, biology, medical sciences, history, you name it.
HDThoreaun 9 hours ago [-]
You can definitely have wrong opinions in the humanities.
samuel 21 hours ago [-]
I don't claim that his work is original (the AI-related work probably is, but it's just tangentially related to rationalism), but it's clearly presented and practical.
And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.
bnjms 19 hours ago [-]
I think you’re mostly right.
But also that it isn’t what the Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught, happy to hear counter examples)
ashwinsundar 18 hours ago [-]
I think I'd rather subscribe to an actual religion, than listen to these weird rationalist types of people who seem to have solved the problem that is "everything". At least there is some interesting history to learn about with religion
bnjms 14 hours ago [-]
I would too if I could, but organized religions make me uncomfortable even though I admire parts of them. Similarly, you don't need to like the rationality types or believe in their program to find one or more of their tools useful.
I’ll also respond to the silent downvoters apparent disagreement. CFAR holds workshops and a summer camp for teaching rationality tools. In HPMoR Harry discusses the way he thinks and why. I read it as more of a way to discuss EY’s views in fiction as much as fiction itself.
HDThoreaun 18 hours ago [-]
Rationalism largely rejects continental philosophy in favor of a more analytic approach. Yes these ideas are not new, but they’re not really the mainstream stuff you’d see in philosophy, literature, or history studies. You’d have to seek out these classes specifically to find them.
TimorousBestie 17 hours ago [-]
They largely reject analytic philosophy as well. Austin and Whitehead are roughly as detestable to a Rationalist as Foucault and Marx.
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
turtletontine 14 hours ago [-]
> For example, I recall being in a lot of arguments that are purely "semantical" in nature.
I believe this is what Wittgenstein called “language games”
throwaway314155 6 hours ago [-]
In spirit of playing said game, I believe you can just use the word "pedantic" these days.
ramon156 3 hours ago [-]
Sounds close to Yuval Noah Harari's book Nexus, which talks about the history of information gathering.
hiAndrewQuinn 20 hours ago [-]
If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
quickthrowman 17 hours ago [-]
Your time would probably be better spent reading his magnum opus, Harry Potter and the Methods of Rationality.
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
NoGravitas 20 hours ago [-]
I have always considered Scott Aaronson the least bad of the big-name rationalists. Which makes it slightly funny that he didn't realize he was one until Scott Siskind told him he was.
wizzwizz4 19 hours ago [-]
Reminds me of Simone de Beauvoir and feminism. She wrote the book on (early) feminism, yet didn't consider herself a feminist until much later.
dcminter 22 hours ago [-]
Upvote for the play link - that's interesting and I hadn't heard of it before. Worthy of a top-level post IMO.
gooseus 14 hours ago [-]
I heard of the play originally from Chapter 10 of On Tyranny by Timothy Snyder.
Unfortunately it didn't get a lot of traction, and dang told me that there wasn't a way to re-up or "second chance" the post due to the HN policy on posts "correlated with political conflict".
dcminter 13 hours ago [-]
Ah, I guess I see his point; I can't see the discussion being about use of metaphor in political fiction rather than whose team is worst.
Still, I'm glad I now know the reference.
lukas099 16 hours ago [-]
This is vibe-based, but I think the Rationalists get more vitriol than they deserve. Upon reflecting, my hypothesis for this is threefold:
1. They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group. People tend not to like being in other peoples' out-groups.
2. They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
3. They're nerds. Whatever has historically caused nerds to be bullied/ostracized, they probably have.
Aurornis 14 hours ago [-]
> They are a community—they have an in-group, and if you are not one of them you are by-definition in the out-group.
The rationalist community is most definitely not exclusive. You can join it simply by declaring yourself a rationalist and posting blogs with "epistemic status" taglines.
The criticisms are not because it's a cool club that won't let people in.
> They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
Herein lies one of the problems with the rationalist community: For all of their talk about heterodox ideas and entertaining different viewpoints, they are remarkably lockstep in many of their opinions.
From the outside, it's easy to see how one rationalist blogger plants the seed of some topic and then it gets adopted by the others as fact. A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something, even though actual experts picked it apart from the beginning and showed how the author was misinterpreting studies, abusing statistics, and ignoring more prominent factors.
The problem isn't differing opinions; the problem is that they disregard actual expertise and make ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence, and they do this very frequently.
lukas099 13 hours ago [-]
> The rationalist community is most definitely not exclusive.
I agree, and didn't intend to express otherwise. It's not an exclusive community, but it is a community, and if you aren't in it you are in the out-group.
> The problem isn't differing opinions, the problem is that they disregard actual expertise and try ham-fisted attempts at "first principals" evaluations of a subject while ignoring contradictory evidence
I don't know if this is true or not, but if it is I don't think it's why people scorn them. Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
const_cast 13 hours ago [-]
> Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
Yes, most people don't care how you arrived at an opinion, they rather care about the practical impact of said opinion. IMO this is largely a good thing.
You can logically push yourself to just about any opinion, even absolutely horrific ones. Everyone has implicit biases and everyone is going to start at a different starting point. The problem with strings of logic about real-world phenomena is that you HAVE to make assumptions. Like, thousands of them. Because real-world phenomena are complex and your model is simple. Which assumptions you choose to make, and in which directions, is completely unknown, even to you, the one making said assumptions.
Ultimately most people aren't going to sit here and try to psychoanalyze why you made the assumptions you made and if you were abused in childhood or deduce which country you grew up in or whatever. It's too much work and it's pointless - you yourself don't know, so how would we know?
So, instead, we just look at the end opinion. If it's crazy, people are just going to call you crazy. Which I think is fair.
zahlman 7 hours ago [-]
> A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something
As proof of what, exactly? And where is your evidence that such a thing happened?
> while ignoring contradictory evidence and they do this very frequently.
The evidence available to me suggests that the rationalist community was not at all "lockstep" as regards the evaluation of SMTM's hypothesis.
gjm11 13 hours ago [-]
Lockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
Or maybe this https://substack.com/home/post/p-39247037 (I admit I don't know for sure whether the author considers himself a rationalist, but I found the link via a search for whether Scott Alexander had written anything about the lithium theory, which it looks like he hasn't, which turned this up in the subreddit dedicated to his writing).
Speaking of which, I can't find any sign that they got an ACX grant. I can find https://www.astralcodexten.com/p/acx-grants-the-first-half which is basically "hey, here are some interesting projects we didn't give any money to, with a one-paragraph pitch from each" and one of the things there is "Slime Mold Time Mold" talking about lithium; incidentally, the comments there are also pretty skeptical.
So I'm not really seeing this "gets adopted by the others as fact" thing in this case; it looks to me as if some people proposed this hypothesis, some other people said "eh, doesn't look right to me", and rationalists' attitude was mostly "interesting idea but probably wrong". What am I missing here?
Aurornis 12 hours ago [-]
> Lockstep like this? https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (a post on Less Wrong, karma score currently +442, versus +102 and +230 for the two posts it cites as earlier favourable LW coverage of the lithium claim -- the comments on both of which, by the way, don't look to me any more positive than "skeptical but interested")
That post came out a year later, in response to the absurdity of the situation. The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
gjm11 11 hours ago [-]
> That post came out a year later [...] multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
The SMTM series started in July 2021 and finished in November 2021; there was also a paper, similar enough that I assume it's by the same people, from July 2021. The first of those "multiple links" is from July 2021, but the second is from January 2022 and the third from May 2022. The critical post is from June 2022. I agree it's a year later than something but I'm not seeing that the SMTM theory was "spreading ... with little question" a year before it.
The "multiple links" you mention -- the actual number is three -- are the two I mentioned before and a third that (my apologies!) I had somehow not noticed. That third one is at +74 karma, again much lower than the later critical post, and it doesn't endorse the lithium theory.
The one written by E.Y. is the second. Quite aside from the later disclaimer, it's hardly an uncritical endorsement: "you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly." and "low-probability massive-high-value gamble".
What about the first post? That one's pretty positive, but to me it reads as "here's an interesting theory; it sounds plausible to me but I am not an expert" rather than "here's a theory that is probably right", still less "here's a theory that is definitely right".
The comments, likewise, don't look to me like lockstep uncritical acceptance. I see "here are some interesting experiments one could do to check this" and "something like this seems plausible but I bet the actual culprit is vegetable oils" and "something like this seems plausible but I bet the actual culprit is rising CO2 levels" and "I bet it's corn somehow" and "quite convincing but didn't really rule out the obvious rival hypothesis" and so forth; I don't think a single one of the comments is straightforwardly agreeing with the theory.
If you've found something Scott Alexander wrote about this then I'd be interested to see it. All I found was that (contrary to what you claimed above) it looks like ACX Grants declined to fund exploration of the lithium theory but included that proposal in a list of "interesting things we didn't fund".
So I'm just not seeing this lockstep thing you claim. Maybe I'm looking in the wrong places. The specific things you've cited don't seem like they support it: you said there was an ACX grant but there wasn't; you say the links in the intro to that critical post show the theory spreading with little question, but what they actually show is one person saying "here's an interesting theory but I'm not an expert", E.Y. saying "here's a theory that's probably wrong but worth looking into" (and later changing his mind), and another person saying "I put together some data that might be relevant"; in every case the comments are full of people not agreeing with the lithium theory.
zahlman 7 hours ago [-]
> The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
By "multiple links" you're referring to the same "two posts". Again, they weren't as popular, nor were they as uncritical as you describe. From Yudkowsky's post, for example:
> If you know about the actual epidemiology of obesity and how ridiculous it makes the gluttony theory look, you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly.... If some weird person wants to go investigate, I think money should be thrown at them, both to check the low-probability massive-high-value gamble
Yudkowsky's argument is emphatically not that the lithium claim is true. He was merely advocating for someone to fund a study. He explicitly describes the claim as "low-probability", and advocates on the basis of an (admittedly clearly subjective) expected-value calculation.
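(To make the shape of that calculation concrete: a minimal Python sketch, where every number is invented for illustration and comes from neither Yudkowsky nor SMTM:)

  # Expected-value sketch of "fund the long shot" reasoning.
  # All numbers are hypothetical, chosen only to show the shape
  # of the argument.
  p_theory_true = 0.05              # a "low-probability" guess
  value_if_true = 10_000_000_000    # payoff of finding a real obesity lever
  study_cost = 1_000_000            # cost of letting someone investigate

  expected_value = p_theory_true * value_if_true - study_cost
  print(expected_value)             # 499,000,000: positive despite p = 0.05

With numbers like these, funding the study is positive expected value even while you believe the theory is probably false, which is exactly the position being described.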
> One of the links is a Eliezer Yudkowsky blog praising the work
That does not constitute "praise" of the work. What Yudkowsky praised was that someone was bucking the trend whereby, in his words,
> almost nobody is investigating it in a way that takes the epidemiological facts seriously and elevates those above moralistic gluttony theories
> Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
Nobody claimed that Yudkowsky ignored the theory.
lechatonnoir 8 hours ago [-]
Aside from the remark given in the other reply to your comment, I wonder what the standard is: how quickly must a community appear to correct its incorrect beliefs for it not to count as sheep?
z3c0 7 hours ago [-]
[dead]
woopwoop 9 hours ago [-]
Agree, but I think there is another, more important factor. They are a highly visible community whose existence is mainly internet-based. This means that the people assessing them are mainly on the internet, and as we all know, internet discourse tends toward the blandly negative (ironically, my own comment is a mild example of this).
teamonkey 15 hours ago [-]
> This is vibe-based
You mean an empirical observation
lowbloodsugar 14 hours ago [-]
Three examples of feelings-based conclusions were presented. There is what is so, and there is how you feel about it. By all means be empirical about what you felt, and maybe look into that. "How this made me feel" describes how we got the USA of today.
j_timberlake 6 hours ago [-]
I think these are all true and relevant, but the main problem is that their thesis that "ASI alignment will be extremely difficult" can only really be proven in hindsight.
It's as if they're crying wolf but can't prove there's actually a wolf, only vague signs of one; and if the wolf ever becomes visible, it will be far too late to do anything. Obviously no one is going to respect a group like that, and many people will despise them.
johnfn 14 hours ago [-]
HN judges rationality quite severely. I mean, look at this thread about Mr. Beast[1], who it's safe to say is a controversial figure, and notice how the top comments are all pretty charitable. It's pretty funny to take the conversation there and then compare the comments to this article.
Scott Aaronson - in theory someone HN should be a huge fan of, from all reports a super nice and extremely intelligent guy who knows a staggering amount about quantum mechanics - says he likes rationality, and gets less charity than Mr. Beast. Huh?
The people commenting under the Mr. Beast post are probably different from the people commenting under this post.
Anyway, Mr. Beast doesn't really pretend to be more than what he is afaik. In contrast, the Rationalist tendency to use mathematics (especially Bayes's theorem) as window dressing is really, really annoying.
directevolve 5 hours ago [-]
What HN has for the rationalist movement isn’t just annoyance - it’s deep contempt and hatred.
foldr 12 hours ago [-]
Most people are trying to be rational (to be sure, with varying degrees of success), and people who aren't even trying aren't really worth having abstract intellectual discussions with. I'm reminded of CS Lewis's quip in a different context that "you might just as well expect to be congratulated because, whenever you do a sum, you try to get it quite right."
throwaway314155 12 hours ago [-]
Being rational and being a rationalist are not the same thing. Funnily enough, this sort of false equivalence, the kind that relies on being "technically correct", is at the core of what makes them... difficult.
foldr 11 hours ago [-]
There's more to the group identity than just applying logic and rationality, yes. But surely 'rationalists' of all people wouldn't want to take the position that some of their key ideas don't result primarily from applying rational thought processes. Any part of their worldview that doesn't follow immediately from foundational principles of logic and reason is presumably subject to revision based on evidence and argument.
sandspar 6 hours ago [-]
Fittingly enough, the Rationalist community talks about this a lot. The canonical article is here ("I can tolerate anything except the outgroup").
The gist is that if people are really different from us then we tend to be cool with them. But if they're close to us - but not quite the same - then they tend to annoy us. Hacker News people are close enough to Rationalists that HN people find them annoying.
It's the same reason why e.g. Hitler-style Neo Nazis can have a beer with Black Nationalists, but they tend to despise Klan-style Neo Nazis. Or why Sunni and Shia Muslims have issues with each other but neither group really cares about Indigenous American religions or whatever.
Nope. Has nothing to do with them being nerds. They are actively dangerous, their views almost always lead to extremely reactionary politics. EA and RA are deeply anti-human. In some cases that manifests as a desire to subjugate humanity with an iron fist technocratic rule, in other cases it manifests as a desire to kill off humanity.
Either way, as an ideology it must be stopped. It should not be treated with kid gloves; it is an ideology that is actively influencing the ruling elites right now (JD Vance, Musk, Thiel are part of this cult, and also simultaneously believe in German-style Nazism, which is broadly compatible with RA). The only silver lining is that some of their ideas about power-seeking tactics are so ineffective they will never work -- in other words, humanity will prevail over these ghouls, because they came in with so many bad assumptions that they've lost touch with reality.
j_timberlake 7 hours ago [-]
If you're going to claim a group is "deeply anti-human ghouls", maybe include an example or 2 of this in your post.
Uhhrrr 7 hours ago [-]
In what specific way do you disagree with them?
creatonez 7 hours ago [-]
The support for eugenics and ethnic cleansing, the absolute obsession with strictly utilitarian ethics and ignorance of other ethics, the "kill a bunch of humans now so that trillions can live in the future" longtermist death cult, and the whole Roko's basilisk worship that usually goes like "one AI system can take over the entirety of humanity and start eating the galaxy, therefore we must forcefully jump in the driver seat of that dangerous AI right now so that our elite ideology is locked in for a trillion years of galactic evolution".
therealdrag0 7 hours ago [-]
Huh? They give millions of dollars to global humanitarian development funds saving at least 50,000 lives per year. Maybe you’re taking a few kooks who have a controversial lecture somewhere as representing everyone else.
creatonez 7 hours ago [-]
It is true that there were EAs before the conception of "longtermism" that were relatively mundane and actually helping humanity, and not part of a death cult. But those have been shunned from the EA movement for a while now.
comp_throw7 3 hours ago [-]
Global Health & Development is like 40% of EA funding today. Please stop making things up.
t_mann 23 hours ago [-]
> “You’re [X]?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
simianparrot 23 hours ago [-]
No actual person talks like that —- and if they really did, they’ve taken on the role of a fictional character. Which says a lot about the clientele either way.
I skimmed a bit here and there after that, but this comes off as plain grandiosity. Even the title is a line you can imagine a Hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
bmacho 4 minutes ago [-]
> "—-"
0.o
.. I think it is plausible that there are people (readers) who find other people (bloggers) basically always right, and that this would be the first thing they would say to them if they met. n=1, but there are some bloggers that I think are basically always right, and I am socially bad, so there is no way to tell what I would blurt out if I met them.
FeteCommuniste 22 hours ago [-]
I assumed that the stuff in quotes was a summary of the general gist of the conversations he had, not a word-for-word quote.
riffraff 22 hours ago [-]
I don't think GP objects to the literalness, as much as to the "I am known for always being right and I acknowledge it", which comes off as.. not humble.
Filligree 18 hours ago [-]
He is known for that, right or wrong.
Ar-Curunir 16 hours ago [-]
I mean, Scott's been wrong on plenty of issues, but of course he is not wont to admit that on his own blog.
johnfn 19 hours ago [-]
To be honest, if I encountered Scott Aaronson in the wild I would probably react the same way. The guy is super smart and thoughtful, and can write more coherently about quantum computing than anyone else I'm aware of.
NooneAtAll3 16 hours ago [-]
if only he stayed silent on politics...
kragen 19 hours ago [-]
Why would you comment on the post if you stopped reading near its beginning? How could your comments on it conceivably be of any value? It sounds like you're engaging in precisely the kind of shallow dismissal the site guidelines prohibit.
JohnMakin 18 hours ago [-]
Aren't you doing the same thing?
kragen 18 hours ago [-]
No, I read the comment in full, analyzed its reasoning quality, elaborated on the self-undermining epistemological implications of its content, and then related that to the epistemic and discourse norms we aspire to here. My dismissal of it is anything but shallow, though I am of course open to hearing counterarguments, which you have fallen short of offering.
casey2 17 hours ago [-]
You are clearly dismissing his experience without much thought. If a park ranger stopped you in the woods and said there was a mountain lion up ahead, would you argue that he doesn't have enough information to be sure from such a quick glance?
Someone spending a lot of time building one or multiple skills doesn't make them an expert on everything, but when they start talking like they are an expert on everything because of the perceived difficulty of those skills, then red flags start to pop up and most reasonable people will notice them and swiftly call them out.
For example, Elon Musk saying "At this point I think I know more about manufacturing than anyone currently alive on earth": even if you rationalize that as an out-of-context deadpan joke, it's still completely correct to call it out as nonsense at the very least.
The more a person rationalizes statements like "AI WILL KILL US ALL" made by a person or cult, the more likely it is that they are a cult member lacking independent critical thinking, having outsourced their thinking to the group. Maybe their thinking is "the best thoughts", in fact it probably is, but it's dependent on the group, so their individual thinking muscle is weakened, which increases their morbidity. (Airstriking a data center will get you killed or arrested by the US Gov. So it's better for the individual to question such statements rather than try to rationalize them using unprovable nonsense like god or AGI.)
johnfn 16 hours ago [-]
If a park ranger said "I looked over the first 2% of the park and concluded there's no mountain lions" - that is, made an assessment on the whole from inspection of a narrow segment - I don't think I would take his word on the matter. If OP had more experience to support his statement, he should have included it, rather than writing a shallow, one-sentence dismissal.
TeMPOraL 14 hours ago [-]
I think the currently popular way of saying "I looked over 2% and assumed it generalizes" is to call the thing ergodic.
Which of course the blog article is not, but then at least the complaint wouldn't sound so obviously shallow.
johnfn 13 hours ago [-]
That's great, I'm definitely going to roll that into my vocabulary.
dcminter 23 hours ago [-]
Also...
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
ARandumGuy 17 hours ago [-]
At least one cult originates from the Rationalist movement: the Zizians [1], a cult that straight up murdered at least four people. And while the Zizian belief system is certainly more extreme than mainstream Rationalist beliefs, it's not that much more extreme.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprung up out of the Bay area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
There's a lot more than one of them. Leverage Research was the one before Zizians.
Those are only named cults though; they just love self-organizing into such patterns. Of course, living in group homes is a "rational" response to Bay Area rents.
cubefox 17 hours ago [-]
[flagged]
ARandumGuy 17 hours ago [-]
Ziz did leave the rationalist movement, but to imply there's no link is simply dishonest. Ziz spent a lot of time in the Bay area Rationalist scene, and a lot of her belief system has clear Rationalist origins (such as timeless decision theory and the general obsession with an all-powerful AI). Ziz did not come up with this stuff in a vacuum, she got it from being a rationalist.
Also, I'm not sure if you did this intentionally, but Ziz is a trans woman. She may have done some awful shit, but that doesn't justify misgendering her.
cubefox 16 hours ago [-]
Just because there is some weak relation to rationalism doesn't justify guilt by association. You could equally have pointed out that most of the Zizians identified as MtF transsexuals and vegans, and then blamed the latter groups for being allegedly extreme. Which would be no less absurd.
> She may have done some awful shit
Murdering several people is slightly worse than "awful shit".
jcranmer 16 hours ago [-]
It's not just some weak relation.
The Behind the Bastards podcast works by having the podcaster invite a guest on the show and tell the story of an individual (or movement, like the Zizians) to provide a live reaction. And in the discussion about the Zizians, the light-bulb moment for the guest, the point where they made the connection "oh, now I can see how this story is going to end up with dead people," happens well before Ziz breaks with the Rationalists.
She ultimately breaks with the Rationalists because they don't view animal welfare as being as important a priority as she does. But it's from the Rationalists that she picks up on the notion that some people are net negatives to society... and that if you're a net negative to society, then perhaps you're better off dead. It's not that far a leap to go from there to "it's okay for me to kill people if they are a net negative to society [i.e., they disagree with me]."
cubefox 15 hours ago [-]
> But it's from the Rationalists that she picks up on the notion that some people are net negatives to society
That belief has nothing to do specifically with rationalism. (In fact, I think most people believe that some people are net negative for society [otherwise, why prisons?], but there is no indication that this belief would be more prevalent for rationalists.)
jcranmer 16 hours ago [-]
The podcast Behind the Bastards described Rationalism not as a cult but as fertile soil that is perfect for growing cults, leading to the development of cults like the Zizians (both the Rationalists and the Zizians are at pains to emphasize their mutual hostility to one another, but if you're not part of either movement, it's pretty clear how Rationalism can lead to something like the Zizians).
astrange 14 hours ago [-]
I don't think that podcast has very in-depth observations. It's just another iteration of east coast culture media people who used to be on Twitter a lot, isn't it?
> the fertile soil which is perfect for growing cults
This is true but it's not rationalism, it's just that they're from Berkeley. As far as I can tell if you live in Berkeley you just end up joining a cult.
sapphicsnail 12 hours ago [-]
I lived in Berkeley for a decade and there weren't many people I would say were in a cult. It's actually quite the opposite. There's way more willingness to be weird and do your own thing there.
Most of the rationalists I met in the Bay Area moved there specifically to be closer to the community.
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
throwaway314155 18 minutes ago [-]
Does this quote make more sense in context? I don't watch the show.
dcminter 17 hours ago [-]
Extreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps? My hobby is occasionally compared to a cult. The typical reaction of an adherent to this accusation is generally "Heh, yeah, totally a cult."
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
FeepingCreature 17 hours ago [-]
> Extreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps?
You don't understand how anxious the rationalist community was around that time. We're not talking self-assured confident people here. These articles were written primarily to calm down people who were panickedly asking "we're not a cult, are we" approximately every five minutes.
junon 23 hours ago [-]
I got to that part, thought it was a joke, and then... it wasn't.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
joenot443 20 hours ago [-]
Scott's done a lot of really excellent blogging in the past. Truthfully, I think you risk depriving yourself of great writing if you're willing to write off an author because you didn't like one sentence.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
derangedHorse 22 hours ago [-]
Is it not a joke? I’m pretty sure it was.
lcnPylGDnU4H9OF 20 hours ago [-]
It doesn’t really read like a joke, but maybe. Regardless, I guess I can at least be another voice saying it didn’t land. It reads like someone literally said that to him verbatim and he literally replied with a simple, “Yes.” (That said, assuming it was a joke is the charitable reading, and being charitable doesn’t make it wrong.)
geysersam 15 hours ago [-]
I'm certain it's a joke. Have you seen any Scott Aaronson lecture? He can't help himself from joking in every other sentence
IshKebab 19 hours ago [-]
I think the fact that we aren't sure says a lot!
myko 20 hours ago [-]
I laughed, definitely read that way to me
alphan0n 22 hours ago [-]
If that was a joke, all of it is.
*Guess I’m a rationalist now.
James_K 19 hours ago [-]
[flagged]
PoignardAzur 12 hours ago [-]
As someone who likes both the Rationalist community and the Rust community, it's fascinating to see the parallels in how the Hacker News crowd treats both.
The contempt, the general lack of curiosity and the violence of the bold sweeping statements people will make here are mind-boggling.
Aurornis 12 hours ago [-]
> the general lack of curiosity
Honestly, I find the Hacker News comments in recent years to be most enlightening because so many comments come from people who spent years immersed in rationalist communities.
For years one of my friend groups was deep into LessWrong and SSC. I've read countless blog posts and other content out of those groups.
Yet every time I write about it, I'm dismissed as an uninformed outsider. It's an interesting group of people who like to criticize and dissect other groups, but they don't take kindly to anyone questioning their own circles.
zahlman 7 hours ago [-]
> Yet every time I write about it, I'm dismissed as an uninformed outsider.
No; you're being dismissed as someone who is entirely too credulous about arguments that don't hold up to scrutiny.
Edit: and as someone who doesn't understand basics about what rationalists are trying to accomplish in certain contexts (like the concept of a calibration curve re the example you brought up of https://www.astralcodexten.com/p/grading-my-2021-predictions). You come across (charitably) as having missed the point, because you have.
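(For anyone unfamiliar: a calibration curve doesn't grade whether each individual prediction came true; it buckets predictions by stated confidence and checks whether, say, the 80% bucket came true about 80% of the time. A minimal Python sketch, with invented predictions:)

  # Minimal calibration-curve sketch. Each pair is (stated probability,
  # whether it came true). The predictions are invented for illustration.
  from collections import defaultdict

  predictions = [(0.6, True), (0.6, False), (0.6, True),
                 (0.8, True), (0.8, True), (0.8, False), (0.8, True),
                 (0.95, True), (0.95, True)]

  buckets = defaultdict(list)
  for prob, outcome in predictions:
      buckets[prob].append(outcome)

  for prob in sorted(buckets):
      hit_rate = sum(buckets[prob]) / len(buckets[prob])
      # Good calibration means hit_rate tracks prob over many predictions,
      # not that every individual call is right.
      print(f"said {prob:.0%}: {hit_rate:.0%} came true, n={len(buckets[prob])}")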
comp_throw7 3 hours ago [-]
You keep saying things that are proven to be trivially false with 5 seconds of examination, so I think the dismissal is justified.
cosmojg 12 hours ago [-]
Both the Rationalist community and the Rust community are very active in pursuing their goals, and unfortunately, it's far easier to criticize others for doing things than it is to actually do things yourself. Worse yet, if you are not yourself actively doing things, you are far more likely to experience fear when other people are, as there is always some nonzero chance that they will do things counter to your own goals, forcing you to act lest you fall behind. Alas, people often respond to fear with hatred, especially given the benefit of physical isolation and dissociation from humanity offered by the Internet, and I think that's what you're seeing here on Hacker News.
georgeecollins 10 hours ago [-]
I thought HN liked Rust? I learned Rust because I heard about it on HN back in the day. I don't want to start a debate about programming languages but if I were going to look for Rust fans, this is where I would expect to find them.
wavemode 7 hours ago [-]
I mostly see non-Rust languages' communities on the defensive on HackerNews, not the other way around.
If you open any thread related to Zig or Odin or C++ you can usually ctrl-F "Rust" and find someone having an argument about how Rust is better.
What kind of commentary have you heard here around the Rust community? I haven’t heard much. Certainly nothing close to violent. I’ve heard lots of shit talking regarding Go because they refused to introduce generics - and to be fair, the reasoning behind the refusal was smug arrogance led by the pompous cock rpike.
I will say as someone who has been programming before we had standardized C++ that “programming communities” aren’t my cup of tea. I like the passion and enthusiasm but it would be good for some of those lads to have a drag, see a shrink and get some nookie.
NoGravitas 20 hours ago [-]
Probably the most useful book ever written about topics adjacent to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the topic of the book is nominally the Alt-Right, a lot more of it is about the capital-R Rationalist communities and individuals that incubated the neoreactionary movement that is currently dominant in US politics. It's probably the best book to read for understanding how we got politically and intellectually from where we were in 2010 to where we are now.
If you want a book on the rationalists that's not a smear dictated by a person who is banned from their Wikipedia page for massive NPOV violations, I hear Chivers' The AI Does Not Hate You (republished as The Rationalist's Guide to the Galaxy) is good.
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
Matticus_Rex 15 hours ago [-]
> Probably the most useful book
You mean "probably the book that confirms my biases the most"
kragen 18 hours ago [-]
Thanks for the recommendation! I hadn't heard about the book.
Aurornis 13 hours ago [-]
Ironically, bringing this topic up always turns the conversation to ad-hominem attacks on the messenger while completely ignoring the subject matter. That's exactly the type of argument rationalists claim to despise, but it gets brought up whenever inconvenient arguments appear about their own communities. All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on its own merits.
If anyone wants to actually engage with the topic instead of trying to ad-hominem it away, I suggest at least reading Scott Alexander's own words on why he so frequently engages in neoreactionary topics: https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
Some select quotes:
> First is a purely selfish reason - my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.
> Third is that I want to spread the good parts of Reactionary thought
> Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.
In this case, HBD means "human biodiversity", which is the alt-right's preferred term for racialism, or the division of humans into races with special attention to the relative intelligence of those different races. This is an oddly recurring theme in Scott Alexander's work. He even wrote a coded blog post to his followers about how he was going to deny it publicly while privately holding it to be very correct.
zahlman 7 hours ago [-]
> Ironically, bringing this topic up always turns the conversation to ad-hominem attacks on the messenger while completely ignoring the subject matter.
This is not a fair or accurate characterization of the criticism you're referring to.
> All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on its own merits.
They are not doing any such thing. The content is being dismissed because it has been repeatedly evaluated before and found baseless. The arguments are acknowledged as specious. Sandifer makes claims that are not supported by the evidence and are in fact directly contradicted by the evidence.
mananaysiempre 18 hours ago [-]
That book, IMO, reads very much like a smear attempt, and not one done with a good understanding of the target.
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries though a sort of guilt by association, is frankly weird: Scott Alexander is well-known among the former to be essentially the only prominent figure that takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the “cult leader” of the rationalists, Yudkowsky, is on the record as despising neoreactionaries to the point of refusing to discuss their views. (As far as recent events, Alexander wrote a scathing review of Yarvin’s involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
I always find it interesting that when the topic of rationalists' fixation on neoreactionary topics comes into question, the primary defenses are that it's important to look at controversial ideas and that we shouldn't dismiss novel ideas because we don't like the group sharing them.
Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
It's no secret that Scott Alexander had a bit of a fixation on neoreactionary content for years. The leaked e-mails showed he believed there to be "gold" in some of their ideas and he enjoyed the extra traffic it brought to his blog. I know the rationalist community has been working hard to distance themselves from that era publicly, but dismissing that chapter of the history because it feels too much like a "smear" or because we're not supposed to like the author feels extremely hypocritical given the context.
throwaway314155 1 minutes ago [-]
> The leaked e-mails
Curious to read these. Got a source?
FeepingCreature 8 hours ago [-]
There are certain parts of the history of the rationalist movement that its enemies are orders of magnitude more "fixated" on than rationalists ever were, Neoreaction and the Basilisk being the biggest.
Part of evaluating unusual ideas is that you have to get really good at ignoring bad ones. So when somebody writes a book called "Neoreaction: a Basilisk" and claims that it's about rationality, I make a very simple expected-value calculation.
zahlman 7 hours ago [-]
> when the topic of rationalists' fixation on neoreactionary topics comes into question, the primary defenses are that it's important to look at controversial ideas and that we shouldn't dismiss novel ideas because we don't like the group sharing them.
No. Rationalists do say that it's important to do those things, because that's true. But it is not a defense of a "fixation on neoreactionary topics", because there is no such fixation. It only comes across as a fixation to people who are unwilling to even understand what they are denigrating.
You will note that Scott Alexander is heavily critical of neoreaction.
> Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
No. Nobody said that those criticisms should be ignored. What was said is that those criticisms are invalid, because they are. It is not ad hominem against Sandifer to point out that Sandifer is trying to insinuate untrue things about Alexander. It is simply observing reality. Sandifer attempts to describe Alexander, Yudkowsky et al. as supportive of neoreactionary thought. In reality, Alexander, Yudkowsky et al. are strongly-critical-at-best of neoreactionary thought.
> The leaked e-mails showed he believed there to be "gold" in some of their ideas
You are engaging in the same kind of semantic games that Sandifer does. Please stop.
kurtis_reed 15 hours ago [-]
The "neoreactionary movement" is definitely not dominant
zahlman 15 hours ago [-]
> incubated the neoreactionary movement that is currently dominant in US politics
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You are presenting a highly contentious worldview for the sake of smearing an outgroup. Please don't. Further, the smear relies on guilt by association that many (including myself) would consider invalid on principle, and which further doesn't even bear out on cursory examination.
At least take a moment to see how others view the issue. "Reliable Sources: How Wikipedia Admin David Gerard Launders His Grudges Into the Public Record" https://www.tracingwoodgrains.com/p/reliable-sources-how-wik... includes lengthy commentary on Sandifer (a close associate of Gerard)'s involvement with rationalism, and specifically on the work you cite and its biases.
IlikeKitties 17 hours ago [-]
I once was interested in a woman who was really into the effective altruism/rationalism crowd. I went to a few meetings with her but my inner contrarian didn't like it.
Took me a few years to realize how cultish it all felt, and that I am somewhat happy my edgy atheist contrarian personality overrode my dick's thinking with that crowd.
Absolutely everybody names it wrong. The movement is called rationality or "LessWrong-style rationality", explicitly to differentiate it from rationalism the philosophy; rationality is actually in the empirical tradition.
But the words are too close together, so this is about as lost a battle as "hacker".
gjm11 13 hours ago [-]
I don't think "rationality" is a good name for the movement, for the same reason as I wish "effective altruism" had picked a different name: it conflates the goal with the achievement of the goal. A rationalist (in the Yudkowsky sense) is someone who is trying to be rational, in a particular way. But "rationality" means actually being rational.
I don't think it's actually true that rationalists-in-this-sense commonly use "rationality" to refer to the movement, though they do often use it to refer to what the movement is trying to do.
FeepingCreature 9 hours ago [-]
Yeah, that's why they famously say they're an "aspiring rationalist." But I don't think there's anything wrong with setting a target, even if it's unreachable.
gjm11 1 hours ago [-]
No, absolutely nothing wrong with setting ambitious and unreachable targets. Reach for the stars and you might at least get to the moon, etc. But if you say "I'm an effective altruist" then you are claiming to actually be effective, and if you say "our movement is Rationality" then you are claiming to be a paradigmatic example of actually being rational, and I think that's a bad idea because it should be possible to say "I'm part of such-and-such a movement" without making that kind of claim.
FeepingCreature 31 minutes ago [-]
Oh yeah, I don't think anyone should claim to be effective or rational, per se. Fwiw the original Sequences make it very clear that a rationalist is supposed to be somebody who pursues rationality, rather than possessing it.
So you say it should be possible to avoid making this claim. I agree, and I believe Eliezer tried! Unfortunately, it was attributed to him anyway.
gjm11 27 minutes ago [-]
Right. And I think that using "Rationality" as a name for the movement as opposed to its goals does amount to making that claim, which is one reason why (so far as I can see) EY never actually uses the word in quite that way and I don't think others should either.
thomasjudge 17 hours ago [-]
Along these lines I am sort of skimming articles/blogs/websites about Lightcone, LessWrong, etc, and I am still struggling with the question...what do they DO?
Mond_ 16 hours ago [-]
Look, it's just an internet community of people who write blog posts and discuss their interests on web forums.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
FeepingCreature 16 hours ago [-]
Twenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this (to him) obvious fact, he first wrote a very good blog about human reason, philosophy, and AI, in order to fix whatever was going wrong in people's heads that caused them not to understand that superintelligence was coming and so on. The group of people who read, commented on, and contributed to this blog are called the rationalists.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
astrange 13 hours ago [-]
> that superintelligence was probably coming, was deeply unsafe
Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much so it's fine.
The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.
Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.
kurtis_reed 15 hours ago [-]
Hang out and talk
dennis_jeeves2 15 hours ago [-]
[dead]
roenxi 23 hours ago [-]
The irony here is that the Rationalist community is made up of the ones who weren't observant enough to notice that "identifying as a Rationalist" is generally not a rational decision.
MichaelZuo 23 hours ago [-]
From what I’ve seen it’s a mix of that, some who avoid the issue, and some who do it intentionally even though they don’t really believe it.
tptacek 16 hours ago [-]
Well, that was a whole thing. I especially liked the existential threat of Cade Metz. But ultimately, I think the great oracle of Chicago got it right when he said:
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
Aurornis 11 hours ago [-]
> I especially liked the existential threat of Cade Metz.
I am perpetually fascinated by the way rationalists love to dismiss critics by pointing out that they met some people in person and they seemed nice.
It's such a bizarre meme.
Curtis Yarvin went to one of the "Vibecamp" rationalist gatherings, was nice to some prominent Twitter rationalists, and now they are ardent defenders of him on Twitter. Their entire argument is "I met him and he was nice".
It's mind boggling that the rationalist part of their philosophy goes out the window as soon as the lines are drawn between in-group and out-group.
Bringing up Cade Metz is a perennial favorite signal because of how effectively they turned it into a "you're either with us or against us" battle, completely ignoring any valid arguments Cade Metz may have brought to the table. Then you look at how they treat Neoreactionaries, and how we're supposed to look past our disdain for them and focus on the possible good things in their arguments, and you realize maybe this entire movement isn't really about truth-seeking as much as they think it is.
tptacek 9 hours ago [-]
I'm hyperfixated on the idea that they're angry at Metz for "outing" Scott Alexander, who published some of his best-known posts under his own name.
comp_throw7 2 hours ago [-]
Your persistent refusal to acknowledge that Scott did not want to go from a world where his patients Googling his real name did _not_ immediately lead to his blog to a world where it _did_, and that this was his primary (and valid) objection to having his real first and last name put into print, is baffling.
autarch 6 hours ago [-]
Isn't "Scott Alexander" his first and middle name? Did he actually use his last name on his blog before the NYT published that piece?
tptacek 5 hours ago [-]
Yes.
twoodfin 7 hours ago [-]
I genuinely worry that the Rationalists are actually Fascist Anarchists who don’t own cars.
leshow 7 hours ago [-]
Ideology is at its most powerful when people think it doesn't exist.
dragonwriter 16 hours ago [-]
> Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism,...
And the (well, a) term for the entire problem is "non-dual awareness".
radicalbyte 23 hours ago [-]
* 20-somethings who are clearly on the spectrum
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
toasterlovin 16 hours ago [-]
Also:
* Communal living
* Sacred texts & knowledge
* Doomsday predictions
* Guru/prophet lives on the largesse of followers
It's rich for a group that claims to reason based on priors to completely ignore that they possess all the major defining characteristics of a cult.
timmytokyo 6 hours ago [-]
Below is a list of cult markers I posted here the last time this came up. It recaps some of the bullet points above.
1. Apocalyptic world view.
2. Charismatic and/or exploitative leader.
3. Insularity.
4. Esoteric jargon.
5. Lack of transparency or accountability (often about finances or governance).
6. Communal living arrangements.
7. Sexual mores outside social norms, especially around the leader.
8. Schismatic offshoots.
9. Outsized appeal and/or outreach to the socially vulnerable.
nancyminusone 18 hours ago [-]
They seem to have a lot in common with the People's Front of Judea (or the Judean People's Front, for that matter).
krapp 23 hours ago [-]
These are the people who came up with Roko's Basilisk, Effective Altruism and spawned the Zizians. I think Robert Evans described them not as a cult but as a cult incubator, or something along those lines.
hollerith 4 hours ago [-]
Effective altruism wasn't started by the Berkeley rationalists.
They have separate origins, but have come to overlap.
antondd 4 hours ago [-]
YCombinator but for cults
ausbah 18 hours ago [-]
so many of the people I’ve read in these rationalist groups sound like they need a hug and therapy
nathcd 17 hours ago [-]
Some of the comments here remind me of online commentary about some place called "the orange site". Always wondered who they were talking about...
mitthrowaway2 14 hours ago [-]
Can't stand that place. Those people are all so sure that they're right about everything.
Nursie 2 hours ago [-]
Honestly, as a daily reader and oftentimes commenter on the dreaded orange site, I've seen a lot of criticism of its denizens, very often along the lines of -
"Computer people who think that because they're smart in one area they have useful opinions on anything else, holding forth with great certainty about stuff they have zero undertanding or insight into"
And you know what, I think they're right. The rest of you are always doing that sort of thing!
(/s, if it's necessary...)
cue_the_strings 23 hours ago [-]
I feel like I'm witnessing something that Adam Curtis would cover in the last part of The Century of Self, in real time.
greener_grass 23 hours ago [-]
There was always an underlying Randian impulse to the EA crowd - as if we could solve any issue if we just got the right minds onto tackling the problem. The black-and-white thinking, groupthink, hero worship and caricature-like literature are all there.
cue_the_strings 20 hours ago [-]
I always wondered: is it her direct influence, or is it just that those characteristics naturally "go together"?
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff; logically, it's a cult.
Matticus_Rex 15 hours ago [-]
Most weird sex stuff takes place outside of cults, so that doesn't follow.
jrm4 19 hours ago [-]
My eyes started to glaze over after a bit; so what I'm getting here is that there's a group that calls itself "Rationalists," but in just about every externally meaningful sense, they're smelling like -- perhaps not a cult, but certainly a lot of weird insider/outsider talk that feels far from rational?
pja 19 hours ago [-]
Capital-R Rationalism definitely bleeds into cult-like behaviour, even if they haven’t necessarily realised that they’re radicalising themselves.
They’ve already had a splinter rationalist group go full cult, right up to & including the consequent flameout of murders & a shoot-out with the cops: https://en.wikipedia.org/wiki/Zizians
amarcheschi 23 hours ago [-]
They call themselves rationalists, yet they don't have very rational opinions if you ask them about scientific racism [1]
As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, and of course those theories are supported by people (mostly white men, if I had to guess) claiming to be rational.
tptacek 6 hours ago [-]
The Lynn IQ data isn't so much "flawed" as it is "fraudulent". There has never been a global comparative study of IQ; Lynn exploited the fact that IQ is a common diagnostic tool, even in places that don't fund a lot of social science research and thus don't conduct their own internal cross-sectional studies, and so ended up comparing diagnostic tests done at hospitals in the developing world with research studies done of volunteers in the developed world.
exoverito 19 hours ago [-]
Human ethnic groups are measurably different in genetic terms, based on single nucleotide polymorphisms and allelic frequencies. There are multiple PCA plots of the 1000 Genomes dataset which show clear cluster separation based on ancestry.
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic variation between groups: a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found that Finns and Swedes have an Fst value of 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese and Yoruba at 0.190.
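(For concreteness: at a single biallelic locus, Wright's Fst can be computed from the two allele frequencies alone. A minimal Python sketch with invented frequencies; the published figures above are averages over many SNPs, not single-locus values like these:)

  # Wright's Fst for a single biallelic locus in two equally sized
  # populations, from allele frequencies alone. The frequencies below
  # are invented for illustration.
  def fst(p1, p2):
      p_bar = (p1 + p2) / 2
      h_total = 2 * p_bar * (1 - p_bar)  # expected heterozygosity, pooled
      h_within = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-group
      return (h_total - h_within) / h_total  # 0 = no differentiation, 1 = maximal

  print(fst(0.50, 0.52))  # ~0.0004: barely differentiated at this locus
  print(fst(0.10, 0.90))  # 0.64: strongly differentiated at this locus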
In genome-wide association studies, polygenic scores have been developed to find thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component for variation in intelligence between groups. It would be dogmatic to a priori claim there is absolutely no genetic component, and pretty obviously motivated out of the fear that inequality is much more intractable than commonly believed.
tptacek 6 hours ago [-]
GWAS results are so inhospitable to race/IQ talking points that race/IQ people have for the last several years been slagging GWAS, and molecular genomics more generally, and saying the twin studies work was the right approach all along. It's a whole thing.
mock-possum 17 hours ago [-]
…but that just sounds like scientific racism.
Rather than judging an individual on their actual intelligence, these kinds of statistical trends allow you to justify judging an individual based on their race, because you feel you can credibly claim that race is an acceptable proxy for their genome, which is an acceptable proxy for their intelligence.
Or for their trustworthiness, or creativity, or sexuality, or dutifulness, or compassion, or aggressiveness, or alacrity, or humility, etc etc.
When you treat a person like a people, that’s still prejudice.
DangitBobby 3 hours ago [-]
I don't know what you're trying to say. That there is no race IQ difference? That we should not determine if there is one? How does one act on the feelings you've expressed?
FeepingCreature 16 hours ago [-]
Well, the rational thing is obviously to be scared of what ideas sound like.
> Rather than judging an individual on their actual intelligence
Actual intelligence is hard to know! However, lots of factors allow you to make a rapid initial estimate of their actual intelligence, which you can then refine as required.
(When the factors include apparent genetic heritage, this is called "racism" and society doesn't like it. But that doesn't mean it doesn't work, just that you can get fired and banned for doing it.)
((This is of course why we must allow IQ tests for hiring; then there's no need to pay attention to skin color, so liberals should be all for it.))
const_cast 12 hours ago [-]
> Well, the rational thing is obviously to be scared of what ideas sound like.
Yes, actually. If an idea sounds like it can be used to commit crimes against humanity, you should pause. You should reassess said idea multiple times. You should be skeptical. You shouldn't ignore that feeling.
What a lot of people are missing is intent - the human element. Why were these studies conducted? Who conducted them?
If someone insane conducts a study then yes - that is absolutely grounds to be skeptical of said study. It's perfectly rational. If extremely racist people produce studies which just so happen to be racist, we should take a step back and go "hmm".
Being right or being correct is one thing, but it's not absolutely valuable. The end result and how "bad" it is also matter, and often they matter more. And, elephant in the room: nobody actually knows if they're right. Reaching a conclusion logically doesn't make it so, because you are forced to make thousands of assumptions.
You might be right, you might not be. Let's all have some humility.
tptacek 6 hours ago [-]
We do allow IQ tests for hiring. That we don't is a super-pernicious online myth. Some of the largest companies in America, with extremely deep pockets for employment law plaintiffs attorneys to come after, do in fact use IQ tests. Most companies don't, though, because they don't work.
FeepingCreature 5 hours ago [-]
Huh. Interesting, thank you, I didn't know that!
tptacek 5 hours ago [-]
(The "they don't work" thing is my editorial gloss; the rest of it though is I think objectively verifiable.)
FeepingCreature 5 hours ago [-]
Um, I asked Grok, and aren't IQ tests soft-outlawed under disparate impact? As opposed to skill qualifications, it's hard to prove that, say, you need to be "intelligent" to work at a certain job. You would end up having to argue in court that intelligence even exists, which I can see why companies would want to avoid; i.e., disparate impact creates a reversed weight of evidence. If your test has disparate impact, you now have to prove that it's necessary, creating a chicken-and-egg problem.
tptacek 4 hours ago [-]
Nope. It's trivially easy to pull up (a small minority of) Fortune 500 companies that use IQ tests in their hiring processes. The companies that offer these tests brag about the companies that use them. Whatever Grok thinks about this doesn't really much matter in the face of that evidence.
FeepingCreature 4 hours ago [-]
I think they use "aptitude tests" or "personality tests" that are at least packaged up to look relevant to the job, not direct naked IQ tests? I can't offhand find companies using actual IQ tests in hiring.
And if you're saying "well those are just repackaged IQ tests, so doesn't it count", then 1. it sure seems like IQ tests are illegal then, but 2. it also seems like they're so useful that companies are trying to smuggle them in anyway?
stonogo 12 hours ago [-]
The assertion that "actual intelligence is hard to know" followed almost immediately by "apparent genetic heritage" is what's wrong with your opinion. And no, it doesn't work -- at least it doesn't work for identifying intelligence. It just works for selecting people who appeal to your bias.
IQ tests are not actual measurements of anything; this is both because nobody has a rigorous working definition of intelligence and because nobody's figured out a universal method of measuring achievement against the insufficient definitions we do have. Their proponents are more interested in pigeonholing people than actually measuring anything anyway.
And as a hiring manager, I'd hire an idiot who is good at the job over a genius who isn't.
FeepingCreature 9 hours ago [-]
Actual intelligence is hard to know precisely/accurately/confidently, is what I meant, my bad. It's not hard to place a guess, and there's nothing wrong with it either so long as you're open to refinement. Your brain does it automatically in the first few seconds of seeing somebody, you can't even prevent it, and skin color is by no means the largest factor in it.
IQ as a metric is correlated with almost every life outcome. It's one of the most reliable metrics in psychology.
As a hiring manager, if you think an idiot can be good at the job, you either hire for an undemanding job or I'm not sure if you're good at yours.
tptacek 6 hours ago [-]
SES, personality, and education are also correlated with life outcomes. Craniometry used to be one of the most reliable metrics in psychology.
FeepingCreature 6 hours ago [-]
Sure, I'm not saying IQ is everything. Do you have articles on craniometry actually being validated by studies? That sounds interesting.
tptacek 6 hours ago [-]
SES, education, and personality are intimately interrelated with "IQ". Once you control for those, what's the leftover signal, and how do you know?
FeepingCreature 5 hours ago [-]
I mean, if IQ is real then SES and education are plausibly strongly caused by it. In that case, controlling for SES and education and then saying "there's not much signal left, thus IQ is bunk" just means that you've basically renamed IQ into "the implicit thing that determines SES and education, which I will not calculate directly." With such related properties it becomes hard to determine causation in general. The obvious test would be to improve education and see if IQ shifts. I don't have a study for this on hand, but I expect (just to register my prediction, not to make an argument) that if I look I'll find that the signal is weak in that direction.
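To make the renaming problem concrete, here's a toy simulation (a minimal sketch; the causal structure and every number are invented purely for illustration, assuming a latent trait drives both education and the outcome):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    g = rng.normal(size=n)              # latent trait; by construction the only cause
    edu = g + 0.3 * rng.normal(size=n)  # education as a noisy proxy of g
    out = g + rng.normal(size=n)        # life outcome, also driven by g

    def residualize(y, x):
        # "control for" x via simple linear regression
        beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
        return y - beta * x

    print(np.corrcoef(g, out)[0, 1])    # raw correlation: ~0.7
    print(np.corrcoef(residualize(g, edu),
                      residualize(out, edu))[0, 1])  # after controlling: ~0.3

The residual correlation collapses even though g is, by construction, the only causal variable here: controlling for a downstream proxy absorbs the very signal you were trying to measure.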
tptacek 5 hours ago [-]
Or if IQ is less real than IQ-ists think it is, then it is plausibly strongly caused by SES and education. See?
I'm not even saying you're wrong (I think you are, but I don't have to defend that argument). I'm just saying the level of epistemic certainty you kicked this subthread off with was unwarranted. You know, "most reliable metrics in psychology" and all that.
FeepingCreature 5 hours ago [-]
I don't see how your argument puts my initial argument into doubt, tbh. If IQ isn't real but SES-and-education are, well then SES-and-education is the thing that you pick up at a glance. I'm not sure that the specific construction of causation here matters.
But also sure, I tend to assert my opinions pretty strongly in part to invite pushback.
tptacek 5 hours ago [-]
All I'm saying is that "most reliable metrics in psychology" is less a mic drop than that sentence would make it sound. The arrows of causality here are extremely controversial --- not politically, but scientifically.
FeepingCreature 5 hours ago [-]
Sure, that's fair. I kinda wish the topic wasn't politicized so we could just get scientists to hash it out without having to ask "is that a scientific conclusion or do you think it would be politically disadvantageous to come to another answer".
My own view is "IQ is real and massively impactful", because of the people I've read on the topic, my understanding of biology, sociology and history, and my experience in general, but I haven't kept a list of citations to document my trajectory there.
wizzwizz4 12 hours ago [-]
Your treatment of IQ is ridiculous. Give me access to a child for seven months, and I can increase their IQ score by 20 (and probably make myself an enemy in the process: IQ test drills are one of the dullest activities, since you can't even switch your brain off while doing them).
Intelligence is not a single axis thing. IQ test results are significantly influenced by socioeconomic factors. "Actual intelligence is hard to know" because it doesn't exist.
I have never yet known scientific racism to produce true results. I have known a lot of people to say the sorts of things you're saying: evidence-free claims that racism is fine so long as you're doing the Good Racism that Actually Works™, I Promise, This Time It's Not Prejudice Because It's Justified®.
No candidate genetic correlate of the g factor has ever replicated. That should be a massive flashing warning sign that – rather than having identified an elusive fact about reality that just so happens to not appear in any rigorous study – maybe you're falling afoul of the same in-group/out-group bias as nearly every group of humans since records begin.
Since I have no reason to believe your heuristic is accurate, we can stop there. However, to further underline that you're not thinking rationally: even if blue people were (on average) 2× as capable at spatial rotation-based office jobs as green people, it still wouldn't be a good idea to start with the skin colour prior and update from there, because that would lead to the creation of caste systems, which hinder social mobility. Even if scientific racism worked (which it hasn't to date!), the rational approach would still be to judge people on their own merits.
If you find it hard to assess the competence of your subordinates, to the point where you're resorting to population-level stereotypes to make hiring decisions, you're an incompetent manager and should find another job.
FeepingCreature 9 hours ago [-]
> Your treatment of IQ is ridiculous. Give me access to a child for seven months, and I can increase their IQ score by 20
That would be remarkable! Do you have a write-up/preprint on your protocol?
wizzwizz4 59 minutes ago [-]
There's nothing remarkable about it:
• Drill.
• Goto step 1.
Does this make the child "more intelligent"? Not in any meaningful way! But they get better at IQ tests.
It's a fairly common protocol. I can hardly be said to have invented it: I was put through it. (Sure, I came up with a few tricks for solving IQ-type problems that weren't in the instruction books, but those tricks too can be taught.)
I really don't understand why people think IQ test results are meaningful. They're among the most obvious cases of Goodhart's law that I know. Make up a sport that most kids won't have practised before, measure performance, and probably that's about as correlated with the (fictitious) "g factor" as IQ tests are.
FeepingCreature 32 minutes ago [-]
I mean, have you actually done this and have you documented your results? Preferably with a placebo and double blind?
The problem with "I've gone through this" is it's hard to analyze the counterfactual.
PaulHoule 9 hours ago [-]
From time to time I see a press release to the effect that some movie star (Alyssa Milano was one) got an IQ score around the max of Raven's Progressive Matrices. I bet somewhere there is a psychologist who will coach you on it, across the street one will test you, and on the next block a PR agency that will make the press release.
I mean, there aren't that many questions on Raven, you could memorize them all, particularly if you've got the kind of intelligence that actors have -- being able to memorize your lines. (And that's something, I have a 1950-ish book about the television industry that makes a point that people expect performers to be a "quick study", you'd better know your lines really well and not have to be told twice that you are expected to do this or that. That's different from, say, being able to solve really complex math problems.)
FeepingCreature 8 hours ago [-]
Isn't the point of Raven that you can synthesize variants at will? Or am I misunderstanding?
I'd consider it entirely plausible that top movie stars are also very smart.
gadders 17 hours ago [-]
As we all know, genetics and evolution only apply from the neck down.
dennis_jeeves2 15 hours ago [-]
True, nature is egalitarian, although only intracranially.
derangedHorse 23 hours ago [-]
Nothing about the article you posted in your first comment seems racist. You could argue that believing in the conclusions of Richard Lynn’s work makes someone racist, but to support that claim, you’d need to show that those who believe it do so out of willful ignorance of evidence that his science is flawed.
amarcheschi 22 hours ago [-]
Scott himself makes a point of the study being debated. It's not. It's not debated. It's pseudoscience, or "science" made with so many questionable points that it's hard to call it "science". He links to a magazine article written by a researcher who was fired, not surprisingly, for his pseudoscientific stances on racism: https://en.m.wikipedia.org/wiki/Noah_Carl
Saying in 2025 that the study is still debated is not only racist, but dishonest as well. It's not debated; it's junk.
wizzwizz4 19 hours ago [-]
It is debated: just not by serious scholars or academics. (Which doesn't necessarily make it wrong; but "scientific racism is bunk, and its proponents are persuasive" is a model whose high predictive power has served me well, so I believe it's wrong regardless.)
mjburgess 22 hours ago [-]
A lot of "rationalists" of this kind are very poorly informed about statistical methodology, a condition they inherit from reading papers in these pseudoscientific fields written by people likewise very poorly informed.
This is a pathology that has not really been addressed in the large, anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if it entered.
amarcheschi 22 hours ago [-]
I agree. I had to read things for an ethics course in IT at uni that read more like science fiction than actual science. Anyway, my point is that it feels pretentious - very pretentious, and I'm being kind with words - to support such pseudoscientific theories and call oneself a rationalist. Especially when these theories can be debunked just by reading the related Wikipedia page.
saalweachter 19 hours ago [-]
More charitably, it is really, really hard to tell the difference between a crank kicked out of a field for being a crank and an earnest researcher being persecuted for not toeing the political line, without being an expert in the field in question and familiar with the power structures involved.
A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.
There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.
pixodaros 13 hours ago [-]
In that essay Scott Alexander more or less says "so Richard Lynn made up numbers about how stupid black and brown people are, but we all know he was right; if only those mean scientists would let us collect the data to prove it." That's the level of thinking most of us moved past in high school, and he is an MD who sees himself as a Public Intellectual! More evidence that thinking too much about IQ makes people stupid.
23 hours ago [-]
ineedaj0b 23 hours ago [-]
[flagged]
bee_rider 19 hours ago [-]
The main things I don’t like about rationalism are aesthetic (the name sucks and misusing the language of Bayesian probability is annoying). Sounds like they are a thoughtful and nice bunch otherwise(?).
retRen87 20 hours ago [-]
He already had a rationalist “coming out” like ages ago. Dude just make up your mind
While this was an interesting and enjoyable read, it doesn't seem to be a “rationalist ‘coming out’”. On the contrary, he's just saying he would have liked going to a ‘rationalist’ meeting.
retRen87 18 hours ago [-]
The last paragraph discusses how he's resisted the label and then he closes with “the rationalists have walked the walk and rationaled the rational, and thus they’ve given me no choice but to stand up and be counted as one of them.”
He’s clearly identifying as a rationalist there
kragen 18 hours ago [-]
Oh, you're right! I'd add that it's actually the penultimate paragraph of the first of two postscripts appended to the post. I should have read those, and I appreciate the correction.
retRen87 10 hours ago [-]
Ah good catch!!
scoofy 16 hours ago [-]
It's weird that "being interested in philosophy" is like... a movement. My background is in philosophy, but the rationalist vs nonrationalist debate seems like an undergraduate class dispute.
My old roommate worked for Open Phil, and was obsessed with AI Safety and really into Bitcoin. I never was. We still had interesting arguments about it all the time. Most of the time we just argued until we got to the axioms we disagreed on, and that was that.
You don't have to agree with the Rationalist™ perspective to apply philosophically rigorous thinking. You can be friends and allies with them without agreeing with all their views. There are strong arguments for why frequentism may be more applicable than bayesianism in different domains. Or why transhumanism is a pipe dream. They are still conversations that are worthwhile as long as you're not so confident in your position that you stop thinking you might learn something.
Aurornis 11 hours ago [-]
> It's weird that "being interested in philosophy" is like... a movement. My background is in philosophy, but the rationalist vs nonrationalist debate seems like an undergraduate class dispute.
Bring up the rationalist community within academic philosophy circles and you'll get a lot of groans.
The fun part about rationalists is that they like to go back to first principles and rediscover basics. The less fun part is that they'll ignore all of the existing work and pretend they're going to figure it all out themselves, often with weird results.
This leaves philosophy people endlessly frustrated as the rationalists write long essays about really basic philosophy concepts as if they're breaking new ground, while ignoring a lot of very interesting work that could have made the topic much more interesting to discuss.
timmytokyo 6 hours ago [-]
Rationalists are constantly falling into ditches that actual philosophers crawled out of centuries ago. But what's even more exasperating is that they do it in tendentious disquisitions that take thousands of wasted words to get.. to.. the.. everloving.. point.
achenet 47 minutes ago [-]
> they do it in tendentious disquisitions that take thousands of wasted words to get.. to.. the.. everloving.. point.
Right, and "actual philosophers" like Sartre and Heidegger _never_ did that. Ever.
"Being and Nothingness" and "Being and Time" are both short enough to fit into a couple tweets, right?
</irony>
scoofy 10 hours ago [-]
I mean, I don't disagree with you there. Even within academic philosophy circles, you'll get groans when one sect is discussing things with another sect. Lord knows ancient philosophy academics and analytic philosophy academics are going to get headaches just trying to hold a conversation... and we're not even including continental school.
My point is that, yes, while it may be a bit annoying in general (lord knows how many times I rolled my eyes at my old roommate talking about trans-humanism), the idea that this Rationalist™ movement's "thinking about things philosophically" is controversial is just weird. That they seem to care about a philosophical approach to thinking about things, and maybe didn't get degrees and maybe don't understand much background while forming their own little school, seems as unremarkable as it is uncontroversial.
bargainbin 20 hours ago [-]
[flagged]
tomhow 10 hours ago [-]
Please don't comment like this on Hacker News. It breaks several guidelines, most notably these ones:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names.
Please don't fulminate. Please don't sneer...
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
KolibriFly 18 hours ago [-]
Sometimes the meta-skill of how you come across while being right is just as important as the correctness itself…
falcor84 19 hours ago [-]
I don't see how that's any more "wanker" than this famous saying by Socrates; Western thought is wankers all the way down.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
lowbloodsugar 14 hours ago [-]
“I don’t like how they said it” and “I don’t like how this made me feel” is the aspect of the human brain that has given us Trump. As long as “how you feel about it” remains a basis for any decision making, the world will continue to be fucked. The author’s audience largely understands that “this made me feel” is an indication that introspection is required, and not an indication that the author should be ignored.
computerthings 13 hours ago [-]
[dead]
gadders 17 hours ago [-]
It's a coping mechanism for autists, mainly.
23 hours ago [-]
os2warpman 18 hours ago [-]
Rationalists as a movement remind me of the individuals who claim to be serious about history but are only interested in a very, VERY specific set of six years in one very specific part of the world.
And boy are they extremely interested in ONLY those six years.
KolibriFly 18 hours ago [-]
It's encouraging to hear that behind all the internet noise, the real-life community is thriving and full of people earnestly trying to build a better future
protocolture 11 hours ago [-]
"I have come out as a smart good thinking person, who knew"
>liberal zionist
hmmmm
Zaylan 8 hours ago [-]
Reading this made me realize that rationality isn’t really about joining a group. It’s more about whether you care about how you think. No matter how solid the logic is, if the starting assumptions are off, it doesn’t help much. Reality is often messier than any model we build.
How do you decide when to trust the model vs trust your instincts?
pja 19 hours ago [-]
Scott Aaronson, the man who turned scrupulosity into a weapon against his own psyche is a capital R rationalist?
Yeah, this surprises absolutely nobody.
great_tankard 18 hours ago [-]
"YOUR ATTENTION PLEASE: I have now joined the club everyone assumed I was already a member of."
mitthrowaway2 14 hours ago [-]
It's his personal blog, the only people whose attention he's asking for are the people choosing to wander over there to see what he's up to.
Not his fault that people deemed it interesting enough to upvote to the front page of HN.
norir 18 hours ago [-]
One of my many problems with rationalism is that it generally fails to acknowledge its fundamentally religious character while pronouncing itself superior to all other religions.
nathias 2 hours ago [-]
I don't understand the urge for Americans to name things the opposite of what they are.
Mikhail_K 18 hours ago [-]
"Rationalists," the "objectivists" rebranded?
lanfeust6 14 hours ago [-]
Political affiliation distribution is similar to that of the general population.
old_man_cato 10 hours ago [-]
"I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all)"
Not incidental!
throw7 18 hours ago [-]
I used to snicker at these guys, but I realized I wasn't being humble or, to be more theologically minded, gracious.
Recognizing we all take a step of faith to move outside of solipsism into a relationship with others should humble us.
apples_oranges 19 hours ago [-]
Never heard of the man, but that was a fun read. And it looks like a fun club to be part of. Until it becomes unbearable perhaps. Also raises the chances of getting invited to birthday orgies..? Perhaps I should have stayed in academia..
moolcool 19 hours ago [-]
> Until it becomes unbearable perhaps
Until?
bovermyer 15 hours ago [-]
I think I'm missing something important.
My understanding of "Rationalists" is that they're followers of rationalism; that is, that truth can be understood only through intellectual deduction, rather than sensory experience.
I'm wondering if this is a _different_ kind of "Rationalist." Can someone explain?
kurtis_reed 15 hours ago [-]
It's a terrible name that collides with the older one you're thinking of
FeteCommuniste 14 hours ago [-]
The easiest way to understand their usage of the term "rational" might be to think of it as the negation of the term "irrational" (where the latter refers mostly to cognitive biases). Not as a contrast with "empirical."
lasersail 12 hours ago [-]
I was at Lighthaven that week. The weekend-long LessOnline event Scott references opened what Lighthaven termed "Festival Season", with a summer camp organised for the following five weekdays, and a prediction market & forecasting conference called Manifest the following weekend.
I didn't attend LessOnline since I'm not active on LessWrong nor identify as a rationalist - but I did attend a GPU programming course in the "summer camp" portion of the week, and the Manifest conference (my primary interest).
My experience generally aligns with Scott's view, the community is friendly and welcoming, but I had one strange encounter. There was some time allocated to meet with other attendees at Manifest who resided in the same part of the world (not the bay area). I ended up surrounded by a group of 5-6 folks who appeared to be friends already, had been a part of the Rationalist movement for a few years, and had attended LessOnline the previous weekend. They spent most of the hour critiquing and comparing their "quality of conversations" at LessOnline with the less Rationalist-y, more prediction market & trading focused Manifest event. Completely unaware or unwelcoming of my presence as an outsider, they essentially came to the conclusion that a lot of the Manifest crowd were dummies and were - on average - "more wrong" than themselves. It was all very strange, cult-y, pseudo-intellectual, and lacking in self-awareness.
All that said, the experience at Summer Camp and Manifest was a net positive, but there is some credence to sneers aimed at the Rationalist community.
20 hours ago [-]
stuaxo 15 hours ago [-]
Had to stop reading, everyone sounded so awful.
gblargg 4 hours ago [-]
> The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a moderate, pro-abundance agenda that would resonate with the public and finally defeat MAGA.
I can't say I'm surprised.
aosaigh 20 hours ago [-]
> “You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
stevenhuang 11 hours ago [-]
Pretty sure that's meant to be taken light-heartedly. You know, as a joke.
api 20 hours ago [-]
That's exactly what Rationalism(tm) is.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
cshimmin 18 hours ago [-]
Thank you for finally making this make sense to me.
api 15 hours ago [-]
Another thing that's endemic in Rationalism is a kind of specialized variety of the Gish gallop.
It goes like this:
(1) Assert a set of priors (with emphasis on the word assert).
(2) Reason from those priors to some conclusion.
(3) Seamlessly, without skipping a beat, take that conclusion as valid because the reasoning appears consistent, and make it part of a new set of priors.
(4) Repeat, or rather recurse since the new set of priors is built on previous iterations.
The entire concept of science is founded on the idea that you can't do that. You have to stop and touch grass, which in science means making observations or doing experiments if possible. You have to see if the conclusion you reached actually matches reality in any meaningful way. That's because reason alone is fragile. As any programmer knows, a single error or a single mistaken prior propagates and renders the entire tree invalid. Do this recursively and one error anywhere in this crystalline structure means you've built a gigantic tower of bullshit.
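As a back-of-the-envelope sketch of that fragility (the per-step reliability is a number I'm inventing purely for illustration):

    # Each inference step is individually quite reliable, but chaining
    # steps multiplies their failure probabilities together.
    step_reliability = 0.95
    for n in (1, 5, 10, 20, 50):
        print(n, round(step_reliability ** n, 3))
    # prints: 1 0.95 / 5 0.774 / 10 0.599 / 20 0.358 / 50 0.077

Fifty steps of 95%-reliable reasoning leave you under 8% likely to still be right at the end, which is exactly why science keeps forcing you back out to observation.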
I compare it to the Gish gallop because of how enthusiastically they do it, and how by doing it so fast it becomes hard to try to argue against. You end up having to try to counter a firehose of Oh So Very Smart complicated exquisitely reasoned nonsense.
Or you can just, you know, conclude that this entire method of determining truth is invalid and throw the entire thing in the trash.
A good "razor" for this kind of thing is to judge it by its fruit. So far the fruit is AI hysteria, cults like the Zizians, neoreactionary political ideology, Sam Bankman-Fried, etc. Has anything good or useful come from any of this?
ModernMech 20 hours ago [-]
Rationalists are better called Rationalizationists, really.
khazhoux 11 hours ago [-]
I would love a link to anything that could convince me the Rationalist community isn't just a bunch of hoo-haw.
I read the first third of HPMOR. I stopped because I found the writing poor, but more importantly, it didn't "open my mind" to any higher-order way of rationalist thinking. My takeaway was "Yup, the original HP story was full of inconsistencies and stupidities, and you get a different story if the characters were actually smart."
I've read a bunch of EY essays and a lot of lesswrong posts, trying to figure out what is the mind-shattering idea.
* The map is not the territory --> of course it isn't.
* Update your beliefs based on evidence --> who disagrees with this? (with exception on religion)
* People are biased and we need to overcome that --> another obvious statement
* Make decisions based on evidence and towards your desired outcomes --> thanks for the tip?
Seems to me this whole philosophy can be captured in about half page of notes, which most people would nod and say "yup, makes sense."
voidhorse 23 hours ago [-]
These kinds of propositions are determined by history, not by declaration.
Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
Sharlin 23 hours ago [-]
Bleh, labels can be restrictive, but guess what labels can also be? Useful.
23 hours ago [-]
resource_waste 20 hours ago [-]
>These kinds of propositions are determined by history, not by declaration.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.
stephc_int13 10 hours ago [-]
This is largely a cult, showing most of the red flags.
But if we put aside the narcissistic traits, lack of intellectual humility, religious undertones and (paradoxically) appeal to emotional responses with apocalyptic framing, the whole thing is still irrelevant BS.
They work in a vacuum, on either false or artificial premises with nothing to back their claims except long strings of syllogism.
This is not science: no measurements, no experiments, no validation; zero value apart from maybe intellectual stimulation and socialisation for nerds with too much free time…
dr_dshiv 20 hours ago [-]
Since intuitive and non-rational thinking are demonstrably rational in the face of incomplete information, I guess we’re all rationalists. Or that’s how I’m rationalizing it, anyway.
Joker_vD 22 hours ago [-]
Ah, so it's like the Order of the October Star: certain people have simply realized that they are entitled to wear it. Or, rather, that they had always been entitled to wear it. Got it.
jrd259 17 hours ago [-]
I'm so out of the loop. What is the new, special sense of Rationalist over what it might have meant to e.g. Descartes?
19 hours ago [-]
babuloseo 9 hours ago [-]
I stopped reading once I read the word "zionist"
akomtu 11 hours ago [-]
Sounds like they hear only themselves? There is a common type of sophisticated thinker who has trained their intellectual muscle to a remarkable degree and got stuck there, refusing to see that intellect isn't the capstone of life. Their mind, ears and mouth quickly form a closed loop that makes them hear only what they themselves say. When this loop strengthens, such a thinker becomes a dogmatic cult leader with a sophisticated but dimly lit inner world that can nevertheless impress smaller minds. Such inner worlds are always like mazes: lots of rooms and corridors, all dimly lit, without exits, and with the impressively developed mind roaming those corridors like the Minotaur.
anonnon 17 hours ago [-]
Does that mean he read the Harry Potter fanfic?
tines 11 hours ago [-]
The HP fanfic is what decisively drove me away from this shitshow years ago. I'm so glad I read that first rather than getting sucked in through another more palatable route.
mkoubaa 18 hours ago [-]
The problem with rationalism is we don't have language to express our thoughts formally enough nor a compiler to transform that language into something runnable (platonic AST) nor a machine capable of emulating reality.
Expecting rational thought to correspond to reality is like expecting a 6 million line program written in a hypothetical programming language invented in the 1700s to run bug free on a turing machine.
Tooling matters.
danans 16 hours ago [-]
> A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru.
Apart from a charismatic leader, a cult (in the colloquial sense) needs a business model, and very often a sense of separation from, and lack of accountability to, those outside the cult, which provides a conveniently simpler environment under which the cult's ideas operate. A sort of "complexity filter" at the entry gate.
I'm not sure how the Rationalists compare to those criteria, but I'd be curious to find out.
Barrin92 19 hours ago [-]
>"frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future"
One of the funniest and most accurate turns of phrases in my mind is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen people myths, their version of the Book of Revelation, gnosticism and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blogpost, if not more, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, bizarre trust in rationalism, or cult-like compounds full of socially isolated new-age prophets.
Terr_ 10 hours ago [-]
This also related to that earlier bit:
> I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?
The reason they landed on a not-so-rational risk to humanity is because it fulfilled the psycho-social need to have a "terrible burden" that binds the group together.
It's one of the reasons religious groups will get caught up on The Rapture or whatever, instead of eradicating poverty.
16 hours ago [-]
20 hours ago [-]
d--b 20 hours ago [-]
Sorry, I haven't followed - what is it that these guys call Rationalism?
Fair warning: when you turn over some of the rocks here you find squirming, slithering things that should not be given access to the light.
nosrepa 13 hours ago [-]
And Harry Potter fan fiction.
achenet 38 minutes ago [-]
I think GP already covered that:
> squirming, slithering things that should not be given access to the light.
;)
d--b 18 hours ago [-]
thanks much
cess11 14 hours ago [-]
The narcissism in this movement is insufferable. I hope the conditions for its existence will soon pass and give way to something kinder and more learned.
resource_waste 20 hours ago [-]
"I'm a Rationalist"
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based thinking.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is a philosophical inferior to myself. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few empirical pieces of evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
17 hours ago [-]
absurdo 14 hours ago [-]
What the fuck am I reading lmao.
Fraterkes 23 hours ago [-]
[flagged]
codehotter 23 hours ago [-]
I view this as a political constraint, cf. https://www.astralcodexten.com/p/lifeboat-games-and-backscra.... One's identity as Academic, Democrat, Zionist and so on demands certain sacrifices of you, sometimes of rationality. The worse the failure of empathy and rationality, the better a test of loyalty it is. For epistemic rationality, it would be best to keep one's identity small (https://paulgraham.com/identity.html), but for instrumental rationality it is not. Consequently, many people are reasonable only until certain topics come up, and it's generally worked around by steering the discussion to other topics.
Fraterkes 22 hours ago [-]
I don’t really buy this at all: I am more emotionally invested in things that I know more about (and vice versa). If Rationalism breaks down at that point it is essentially never useful.
lcnPylGDnU4H9OF 20 hours ago [-]
> I don’t really buy this at all
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this break down happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
Fraterkes 17 hours ago [-]
My reading of the comment was that there was only a small subset of contentious topics that rationalism is unsuited for. But I think you are correct
voidhorse 22 hours ago [-]
And this is precisely the problem with any dogma of rationality. It starts off ostensibly trying to help guide people toward reason but inevitably ends up justifying blatantly shitty social behavior like defense of genocide as "political constraint".
These people are just narcissists who use (often pseudo)intellectualism as the vehicle for their narcissism.
tome 22 hours ago [-]
I'm curious how you assess, relatively speaking, the shittiness of defence of genocide versus false claims of genocide.
voidhorse 22 hours ago [-]
Ignoring the subtext, actual genocide is obviously shittier and if you disagree I doubt I could convince you otherwise in the first place.
Wouldn’t it be better to spend the time understanding the reality of the situation in Gaza from multiple angles rather than philosophizing on abstract concepts? I.e. there are different degrees of genocide, but that doesn’t matter in this context because what’s happening in Gaza is not abstract or theoretical.
In other words, your question ignores so much nuance that it’s a red herring IMO.
tome 19 hours ago [-]
[flagged]
SantalBlush 18 hours ago [-]
If you suggest an answer to your own question, it could be disputed. Better to make a coy comment and expect others to take that risk.
tome 18 hours ago [-]
I'm not the one making pronouncements about the "shittiness" of forms of human behaviour.
SantalBlush 16 hours ago [-]
God forbid.
tome 14 hours ago [-]
Sorry, I'm not sure what you mean. Could you explain what you mean by "god forbid" in this context?
voidhorse 16 hours ago [-]
Assuming you do believe that genocide is extremely shitty, wouldn't that imply that defense of (actual) genocide, or the principle of it, is in all likelihood shittier than a false accusation of genocide? Otherwise I think you'd have to claim that a false accusation is somehow worse than the actuality or possibility of mass murder, which seems preposterous if you have even a mote of empathy for your fellow human beings.
As others have pointed out, the fact that you would like to make light of cities being decimated and innocent civilians being murdered at scale in itself suggests a lot about your inability to concretize the reality of human existence beyond yourself (lack of empathy). It's this kind of outright callousness toward actual human beings that I think many of these so called "rationalists" share. I can't fault them too much. After all, when your approach to social problems is highly if not strictly quantitative you are already primed to nullify your own aptitude for empathy, since you view other human beings as nothing more than numerical quantities whenever you attempt to address their problems.
I have seen no defense of what's happening in Gaza that anyone who actually values human life, for all humans, would find rational. Recall the root of the word: ratio, proportion. What is happening in this case is quite blatantly a disproportionate response.
kombine 20 hours ago [-]
We have concrete examples of defence of genocide, such as by Scott Aaronson. Can you provide the examples of "false accusations of genocide", otherwise this is a hypothetical conversation.
tome 20 hours ago [-]
I can certainly agree we have a concrete example of defence of purported genocide and a concrete example of an accusation of purported genocide. Beyond that I'd be happy to discuss further (although it's probably off topic).
Ar-Curunir 16 hours ago [-]
Do you have any beliefs beyond just obfuscatory both-sideism and genocide apologia?
tome 14 hours ago [-]
[flagged]
Ar-Curunir 16 hours ago [-]
This is bullshit both-sidesism. Many many independent (and Israeli!) observers have clearly documented the practices that Israel is doing in Palestine, and these practices all meet the standard definition of genocide.
tome 14 hours ago [-]
Interesting conclusion, since I didn't make a claim either way.
Still, for the record, other independent observers have documented the practices and explained why they don't meet the definition of genocide, John Spencer and Natasha Hausdorff to name two examples. It seems by no means clear that it's valid to make a claim of genocide. I certainly wouldn't unless I was really, really certain of my claim, because to get such a claim wrong is equally egregious to denying a true genocide, in my opinion.
voidhorse 9 hours ago [-]
I really think you should consider whether you are being intellectually honest to yourself and others. Here is the dictionary definition of genocide:
> the deliberate killing of a large number of people from a particular nation or ethnic group with the aim of destroying that nation or group.
I didn't read the arguments of the people you cited because you didn't bother to link to them, but looking at their credentials, I'm not sure either of them are in a position to solidly refute the scores of footage and testimony to the contrary. The only people I've seen defending this have some kind of personal attachment to Israel, but come on, man, you can maintain that attachment and still condemn horrific acts committed by the state when they happen.
tome 3 hours ago [-]
Sorry, what in my post suggests intellectual dishonesty?
Under your dictionary definition, would you say that the Allies in WWII committed genocide of Germans and Japanese? If you say "yes", then I suppose you're entitled to your interpretation, and I know exactly how seriously to take it. If you say "no" then perhaps you could explain what is the difference between WWII (where proportionally far more civilians were killed) and the current Gaza war.
> you can maintain that attachment and still condemn horrific acts committed by the state when they happen
I agree. And one can refrain from condemning horrific acts that one is not certain happened.
I'm happy to continue the discussion if you like, but all I'm really after is your answer to this simple question: do you see defence of true genocide as equally bad as false accusation of genocide? I would be content with a simple yes or no. Or does the question make you uncomfortable somehow?
SantalBlush 9 hours ago [-]
>Natasha Hausdorff (born October 1989) is a British barrister, international law expert, and member of pro-Israel lobbying group UK Lawyers for Israel. [0]
>John W. Spencer is a retired United States Army officer, researcher of urban warfare, and author. [1]
lmao, if I find two people from the military or law who deny the holocaust, then will you deny it too? Actually, maybe you shouldn't answer that.
I'm happy to answer that. No I wouldn't. The Holocaust is probably the most studied subject in human history. It's not as though there is much uncertainty about the major events of it.
In any case, this is really going off topic. All I am interested in is in voidhorse's answer to my simple question. That doesn't require retreading many of the the dark corners of human history.
Fraterkes 23 hours ago [-]
(Ive also been somewhat dogmatic and angry about this conflict, in the opposite direction. But I wouldnt call myself a rationalist)
skybrian 21 hours ago [-]
Anything in particular you want to link to as unreasonable?
pbiggar 23 hours ago [-]
[flagged]
zaphar 23 hours ago [-]
I'm not a Rationalist, however, nothing you said in your first paragraph is factual and therefore the resultant thesis isn't supported. In fact it ignores nearly 2-3000 years of history and ignores a whole bunch of surrounding context.
simiones 22 hours ago [-]
The 2-3000 years of history are entirely and wholly irrelevant. Especially as history shows clearly that the Palestinians are just as much the descendants of the ancient Israelites as the Jewish diaspora that returned to their modern land after the founding of modern Israel. The old population from before the Roman conquest never disappeared - some departed and formed the diaspora, but most stayed. Some converted to Christianity during this time as well. Later, they were conquered by Mohammed and his Caliphate, and many converted to Islam, but they're still the same people.
tome 22 hours ago [-]
Is your claim that genetics determines who should live in a particular patch of land?
simiones 22 hours ago [-]
No, not genetics, but heritage is a valid, and very commonly used, criterion.
I.e., the following is, I believe, a reasonable argument:
"I should have a right to live in this general patch of land, since my grand-parents lived here. Maybe my parents moved away and I was born somewhere else, but they still had a right to live here and I should have it too. I may have to buy some land to have this right, I'm not saying I should be given land - but I should be allowed to do so. Additionally, it matters that my grand-parents were not invaders to this land. Their parents and grand-parents had also lived here, and so on for many generations."
This doesn't imply genetic heritage necessarily - cultural heritage and the notions of parents are not necessarily genetic. I might have ~0% of the specific DNA of some great-great-grand-parent (or even 0% of my parents' DNA, if I am adopted) - but I'm still their descendant. Now, how far you want to stretch this is very much debatable.
tome 22 hours ago [-]
This seems at odds with your earlier claim that "The 2-3000 years of history are entirely and wholly irrelevant". In fact the history of those 2-3000 years seems essential to determining "heritage".
simiones 21 hours ago [-]
That was in response to the argument that I believe the GP was making by bringing into discussion this timeline. The typical way this is presented by adherents of Zionism is something like "the Jewish people are the original people who lived in Israel/were given it by God; this land was stolen from the Jewish people and they were expelled, first by the Romans and then by the Arabs; the foundation of modern-day Israel marked the return home of the Jewish people, as was their right by their 2-3000 years of having lived there; the Palestinians were just the latest population living on this stolen land". By this logic, they then claim that Israel are not occupying any land, even Gaza or the West Bank, it is the Palestinians who had been occupying the land of Israel.
My claim is that this is factually incorrect by any stretch of the imagination, as soon as we recognize that the modern-day Palestinians and the modern-day Jewish people are just as much descendants of the ancient Israelites. Just because their language, culture, and religion have diverged, there is nothing that ties one group more to that land than the other (if anything, those that had left have a lesser tie than those that stayed, even if the culture of those that stayed diverged). So the claim of descent and continuity with the ancient kingdom of Israel, the 2-3000 year old history, is entirely irrelevant.
tome 21 hours ago [-]
Are you responding to an argument zaphar didn't make? He/she just said "your first paragraph ... ignores nearly 2-3000 years of history", which is true. Now you seem to be saying "if you look at the first 2-3000 years of history you will see that the first 2-3000 years of history are irrelevant", which is about as self-defeating as an argument can possibly be!
simiones 1 hours ago [-]
Zaphar didn't make any argument, they only implied one. They said that the previous poster was wrong about everything, and then brought up the previous 2-3000 years of history as some vague justification for that with no actual argument.
I responded to the most plausible interpretation of what the 2-3000 years of history could have to do with the previous poster being wrong about Israel occupying the lands of the Palestinian people.
And again, as to the claim: I'm basically saying that the 2-3000 years of history don't, in fact, justify the occupation - so just forgetting about them and focusing on what is actually happening today (a population is being kept in an occupied pseudo-state that they aren't allowed to leave) is enough to understand the whole situation, and who is in the right and who is in the wrong. So the 2-3000 years of history are irrelevant, because they don't overturn the easily visible conclusion you would draw.
Of course, in every conflict, the history is interesting and enlightening in some ways. But, unless the history changes the light in which you view the current conflict, it's irrelevant to the question of "who is the oppressor and who is the oppressed?".
komali2 19 hours ago [-]
I've observed you making multiple comments like this throughout the entire thread - nothing but muddying the waters. You're engaging in incredibly bad faith.
tome 19 hours ago [-]
It's extremely presumptuous to declare I'm engaging in bad faith (especially "incredibly bad"). I can assure you I'm engaging in good faith, and attempting to seek some clarity on the assumptions underlying the conclusions a few people here have reached. Naturally, it is often very uncomfortable to be challenged on deep assumptions.
komali2 19 hours ago [-]
No, I'm correct, and you're taking it further here. Lots of words to fill space, it's actually one of the most inscrutable forms of trolling I've ever seen on the internet, so I congratulate you for that.
You're purposefully misinterpreting multiple comments. The "That was in response to the argument that I believe the GP was making by bringing into discussion this timeline" comment shouldn't have even been necessary, at that point you'd already purposefully misinterpreted a very clear comment, but even if that had been a mistake on your part, simiones clearly explained the conversation to you, and yet you purposefully misinterpreted it again.
If you're going to keep up this tack, my challenge to you for your next comment to me is to find a way to convince me to waste even more time replying to you. I only wanted at least someone to put a foot down and point out what you're doing. Something like pedantry, but combined with bad-faith interpretations, and even more annoying for it.
simiones says: "The 2-3000 years of history are entirely and wholly irrelevant", and also makes some suggestion that I believed could indicate that genetic heritage was what determines which people should live where.
However, he subsequently clarified in https://news.ycombinator.com/item?id=44318095 that he was not making that claim. Yet in doing so claimed that "This doesn't imply genetic heritage necessarily - cultural heritage and the notions of parents are not necessarily genetic", drawing on the notion of "culture". Now, cultural heritage very specifically implies that history is relevant, because it's something passed down over centuries.
I then challenged him that his invocation of cultural heritage was in opposition to his earlier claim that "The 2-3000 years of history are entirely and wholly irrelevant" (https://news.ycombinator.com/item?id=44318122) to which he responded that "That was in response to the argument that I believe the GP was making" (https://news.ycombinator.com/item?id=44318317), but that's a complete presumption. The GP hadn't presented any specific argument, merely factually pointed out that some long stretch of history was missing from the analysis of pbiggar, so I asked simiones if he was responding to an argument not actually made by zaphar. Furthermore, I reiterated that simiones seemed to have defeated his own claim that "The 2-3000 years of history are entirely and wholly irrelevant".
This is where the original discussion ends, and you entered the thread. I see you making a number of unsubstantiated accusations of bad faith and trolling, but not actually engaging in the discussion of the topic at hand.
So, I have presented here a summary of a thread that highlights my process of rational enquiry. I don't see here what could be taken as bad faith or trolling. Maybe you can explain further? Or perhaps maybe you can you engage with the topic at hand? I would be willing to (though it goes rather far off the original topic).
komali2 4 hours ago [-]
Thank you, in one sense you failed the challenge because I'm not interested in engaging in your deeper trolling, however you have reminded me how much of a waste of time posting on the internet is in general. I needed the reminder.
May your endless paragraphs continue to alleviate the suffering of the Palestinians experiencing genocide, or the poor Israelis who are sad because they have to do a genocide because whatever ethnostatist reason, or whatever it is you believe - from your post history, I'd guess Israel Enjoyer, but from the threads here it's anyone's guess. The benefit of being a smug Socratic type engaged in pedantry is you can never be accused of having the wrong values, since from initial appearances, you have none.
tome 3 hours ago [-]
> Thank you, in one sense you failed the challenge because I'm not interested in engaging in your deeper trolling
And yet you replied, curious. In any case, that's OK, since I wasn't responding in order to meet your challenge.
It's curious that you accuse me of having "no values" and of being a "Socratic type". I assumed that, on Hacker News, a forum reputed for its willingness to engage intellectually, a simple challenge to someone's argument would receive a simple response. I assumed that rational debate, free of emotive diversions, was welcomed here. Why would "my values" be relevant? Surely establishing a rational dialogue is what's important on Hacker News. This isn't Reddit, where the standard of dialogue is typically much lower.
simiones could have said "oh yes, you're right, the last 2-3,000 years of history are relevant". Or he could have continued by providing more rationale that they're not. Yet neither he nor anyone else has responded to my observation, instead I just received comments targeted personally at me.
It makes me wonder whether one side of this debate actually has substance to back up its beliefs and actions.
Of course, you have no obligation to respond. If you do respond, I would appreciate it if you would make it about substantive, rational arguments, not personal comments.
atwrk 22 hours ago [-]
Not interested in discussing that topic here, but that is precisely the kind of category error that would fit right in with the rationalist crowd: GP was talking about human rights, i.e. actual humans, you are talking about nations or peoples, which is an entirely orthogonal concept.
phgn 23 hours ago [-]
Very well put.
skippyboxedhero 23 hours ago [-]
[flagged]
simiones 22 hours ago [-]
While both sides have been engaged in crimes against humanity, only one is engaged in a violent occupation, by any stretch of the imagination.
kombine 23 hours ago [-]
[flagged]
komali2 19 hours ago [-]
What's incredible to me is the political blindness. Surely at this point, "liberal zionists" would at least see the writing on the wall? Apply some Bayesian statistical analysis to popular reactions to unprompted military strikes against Iran or something, they should realize at this point that in 25 years the zeitgeist will have completely turned against this chapter in Israel's history, and properly label the genocide for what it is.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
VincentEvans 16 hours ago [-]
[dead]
throwaway984393 20 hours ago [-]
[dead]
unit149 17 hours ago [-]
[dead]
bdbenton5255 14 hours ago [-]
[flagged]
paganel 23 hours ago [-]
[flagged]
phgn 23 hours ago [-]
Thank you for sharing the link.
It's very hard for me to take anyone seriously who doesn't speak out against the genocide. They're usually arguing about imaginary problems.
("if the will exists" in the article puts the blame for the situation on one side, which is inacceptable)
honeybadger1 23 hours ago [-]
[flagged]
LastTrain 23 hours ago [-]
[flagged]
meindnoch 16 hours ago [-]
[flagged]
musha68k 17 hours ago [-]
Very Bay Area to assume you invented Bayesian thinking.
That's a different definition of rationalism from what is used here.
AnimalMuppet 20 hours ago [-]
It is. But the Rationalists, by taking that name as a label, are claiming that they are what the GP said. They want the prestige/respect/audience that the word gets, without actually being that.
FeepingCreature 16 hours ago [-]
(The rationalists never took that label, it is falsely applied to them. The project is called rationality, not rationalism. Unfortunately, this is now so pervasive that there's no fixing it.)
AnimalMuppet 16 hours ago [-]
Hmm, interesting. Might I trouble you for your definitions of rationality and rationalism?
(Not a "gotcha". I really want to know.)
FeepingCreature 16 hours ago [-]
Sure! Rationality is what Eliezer called his project about teaching people to reason better (more empirically, more probabilistically) in the events I described over here: https://news.ycombinator.com/item?id=44320919 .
I don't know rationalism too well but I think it was a historical philosophical movement asserting you could derive knowledge by reasoning from axioms rather than observation.
The primary difference here is that rationality mostly teaches "use your reason to guide what to observe and how to react to observations" rather than doing away with observations altogether; it's basically an action loop alternating between observation and belief propagation.
A prototypical/mathematical example of a pure LessWrong-type "rational" reasoner is Hutter's AIXI (a definition of the "optimal" next step given an input tape and a goal), though it has certain known problems of self-referentiality. Though of course reasoning in this way does not work for humans; a large part of the Sequences is attempts to port mathematically correct reasoning to human cognition.
You can kind of read it as a continuation of early-2000s internet atheism: instead of defining correct reasoning by enumerating incorrect logic, ie. "fallacies", it attempts to construct it positively, by describing what to do rather than just what not to do.
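To make that "observation and belief propagation" loop concrete, here is a minimal Python sketch (my own illustration with made-up likelihood numbers, not anything from the Sequences): an agent holds a probability for a hypothesis H and revises it with Bayes' rule as each observation arrives.

    # Toy belief-propagation loop: observe, update, repeat.
    # All likelihood numbers below are hypothetical, for illustration only.

    def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
        """Return posterior P(H | observation) via Bayes' rule."""
        numerator = p_obs_given_h * prior
        evidence = numerator + p_obs_given_not_h * (1.0 - prior)
        return numerator / evidence

    belief = 0.5  # start maximally uncertain about H
    # Each observation: (P(obs | H), P(obs | not H))
    observations = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]

    for p_h, p_not_h in observations:
        belief = bayes_update(belief, p_h, p_not_h)
        print(f"updated belief in H: {belief:.3f}")  # 0.727, 0.823, 0.954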
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the whole situation that we live in is not stable at all: "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (e.g. peak oil, global warming), we really have no idea what will be the triggers and inflection points that change the social fabric in ways that are unforeseeable and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean, 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
The real conflict here is between Darwinism and enlightenment ideals. But I have yet to see any self-styled Rationalists take this seriously.
Emotionally I don’t subscribe to this view. Rationally I do.
My critique for rational people is that they don’t seem to fully take experience into account. It’s assumptions + rationality + experience/data + whatever strong inclinations one has that seems to be the full picture for me.
That always seemed like a meaningless argument to me. To an outside observer, free will is indistinguishable from a random process over some range of possibilities. You aren't going to randomly go to sleep with your hand in a fire; there's some hard-coded biology preventing that choice. But that only means human behavior isn't completely random, which is hardly a groundbreaking discovery.
At the other end we have no issues making an arbitrary decision where there’s no way to predict what the better choice is. So what exactly does free will bring to the table that we’re missing without it? Some sort of mystical soul, well what if that’s also deterministic? Unpredictability is useful in game theory, but computers can get that from a hardware RNG based on quantum processes like radioactive decay, so it doesn’t mean much.
Finally, subjectively the answer isn’t clear so what difference does it make?
E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
Similarly, wealth disparities can't be excused by someone choosing to work harder, because they had no agency in the "decision".
You can still justify some degree of punishment and reward, but a lack of free will changes which justifications are reasonable very substantially.
E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders carried out in the heat of passion rather than e.g. gang crimes, the odds of someone carrying out another are extremely low, the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.
And so holding on to a belief in free will is easier to a lot of people than the alternative.
My experience is that there are few issues where people so easily get angry than if you suggest we don't have free will once they start thinking through the consequences (and some imagined ones...).
For example
> E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
and
> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals
are simply logical to me (even without assuming any lack of free will).
So what is emotionally difficult about this, as you claim?
The same: that it is not the lived experience. I notice that I care about free choice.
The idea that there's no free will may be a pessimistic outlook to some but to me it's a strictly neutral one. It used to be a bit negative, until I looked more closely that there's a difference between looking at a situation objectively and having a lived experience. When it comes to my inclinations and how I want to live life, lived experience takes precedence.
My thoughts on this aren't fully sharp, but I don't think the concept even exists philosophically, which I think is also what you're getting at. It's a conceptual remnant from the past.
But though that is the colloquial meaning, it doesn't line up with what people say they want: you want to make your choice according to your own reasons. You want free choice. But unless your own reasoning includes a literal throw of the dice, your justifications deterministically decide the outcome.
"Free will" is the ability to make your own choices, and for most people most of the time, those choices are deterministic given the options and knowledge available. Free will and determinism are not only compatible, but necessarily so. If your choices weren't deterministic, it wouldn't be free will.
But when you probe people, while a lot of people will argue in ways that a philosopher might call compatibilist, my experience is that people will also strongly resist the notion that the only options are randomness and determinism. A lot of people have what boils down to a religious belief in a third category that is not merely a combination of those two, but infuses some mysterious third option where they "choose" in a way they can't explain.
Most of the time, people who believe there is no free will (and can't be), like me, take positions similar to what you described, that - again - a proponent of free will might describe as compatibilist, but sometimes we oppose the term for the reason above: A lot of people genuinely believe in a "third option" for choices are made.
And so there are really two separate debates on free will: Does the "third option" exist or not, and does "compatibilist free will" exist or not. I don't think I've ever met anyone who seriously disagrees that "free will" the way compatibilists define it exists, so when compatibilists get into arguments over this, it's almost always a misunderstanding...
But I have met plenty of people who disagree with the notion that things are deterministic "from the outside".
Since there's no free will, outcomes are determined by luck, and what matters is how lucky we can make people through pit-of-success environments. Rust makes people luckier than C++ does.
I also have much less patience for blame than I do in a world with free will. I believe, for example, that blameless postmortems lead to much better outcomes than trying to pretend people had free will to make mistakes, and therefore blaming them for those mistakes.
You can get to these positions through means other than rejection of free will, but the most robust grounds for them are fundamentally deterministic.
What I'm saying is that there's no logical point to the concept "should" unless you have some concept of free will: everything that happens must happen, or is entirely random.
What do you mean by that? It still exists doesn't it? Albeit in a probabilistic sense that becomes non-probabilistic at larger scales.
I don't know much about quantum other than the high level conceptual stuff.
It's controversial, but here is the argument that the answer is "no": See https://flownet.com/ron/QM.pdf
Or if you prefer a video: https://www.youtube.com/watch?v=dEaecUuEqfc
>Under QIT, a measurement is just the propagation of a mutually entangled state to a large number of particles.
*eyeroll* So it's MWI in disguise, but MWI is quantum realism. The illusion they talk about is that the observed macroscopic state is part of a bigger superposition (incomplete observation). But that's dumb: even if it's part of a bigger state, it's still real, because it's not made up, but observed.
https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS
"Moloch hasn't won" is a lengthy critique of the argument you are making here.
Why can't that observation be taken into account? Isn't the entire point of the approach accounting for all inputs to the extent possible?
I think you are making invalid assumptions about the motivations or goals or internal state or etc of the actors which you are then conflating with the approach itself. That there are certain conditions under which the approach is not an optimal strategy does not imply that it is never competitive under any.
The observation is then that rationalism requires certain prerequisites before it can reliably out compete other approaches. That seems reasonable enough when you consider that a fruit fly is unlikely to be able to successfully employ higher level reasoning as a survival strategy.
Of course it can be. I'm saying that AFAICT it generally isn't.
> rationalism requires certain prerequisites before it can reliably out compete other approaches
Yes. And one of those, IMHO, is explicit recognition that rationalism does not triumph simply because it is rational, and coming up with strategies to compensate. But the rationalist community seems too hung up on things like malicious AI and Roko's basilisk to put much effort into that.
I'm sympathetic to the idea that we know nothing because of the reproductive impulse to avoid doing or thinking about things that led our ancestors to avoid procreation, but such a conclusion can't be total, because otherwise it is self-defeating: it is contingent on rationalist assumptions about the mind's capacity to model knowledge.
Even then that might not always be the case. Sometimes there are severe time or bandwidth or energy or other constraints that preclude carefully collecting data and thinking things through. In those cases a heuristic that is very obviously not derived from any sort of critical thought process might well be the winning strategy.
There will also be cases where the answer provided by the rational approach will be to conform to some other framework. For example where cult type ingroup dynamics are involved across a large portion of the population.
The fact that you can write this sentence, consider it to be true, and yet still hold in your head the idea that the future might be bad but it's still important to have children suggests that "contact with reality" is not a curse.
If gender equality and intellectual achievement don't produce children, then that isn't "darwinism selecting rationality out". You can't expect the continued existence of finite lifespan organisms if there are no replacement organisms. Raising children is hard work. The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves, particularly when their belief in gender equality entails not wanting male offspring. The alternative is essentially freeloading and expecting others, who do not share the beliefs, to produce children for you and also to teach them the "enlightened" belief of forcing "enlightened" beliefs onto others (note the circularity, the initial conditions are usually irrelevant and often just a fig leaf to perpetuate the status quo).
I'm not sure I entirely understand what you're arguing here, but I absolutely do agree that the most powerful force in the universe is natural selection.
The term "survival of the fittest" predates Darwin's Origin of Species, and was adopted by Darwin within his lifetime, btw.
You should not interpret that historical success to imply future success as it depended on non-sustainable groundwater extraction.
Eg, https://en.wikipedia.org/wiki/Ogallala_Aquifer
> Many farmers in the Texas High Plains, which rely particularly on groundwater, are now turning away from irrigated agriculture as pumping costs have risen and as they have become aware of the hazards of overpumping.
> Sixty years of intensive farming using huge center-pivot irrigators has emptied parts of the High Plains Aquifer.
> as the water consumption efficiency of the center-pivot irrigator improved over the years, farmers chose to plant more intensively, irrigate more land, and grow thirstier crops rather than reduce water consumption--an example of the Jevons Paradox in practice
How will the Great Plains farmers get water once the remaining groundwater is too expensive to extract?
Salt Lake City cannot simply build desalination plants to fix its water problem.
I expect the bad experiences of Okies during the internal migration of the Dust Bowl will be replicated once the temporary (albeit century-long) relief of using fossil water is exhausted.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
The Rationalist community understands that very well. They even know how to put bounds on the unknowns and their own lack of information.
> The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
Right. Good thing they'd agree with you 100% on this.
No they don't. They think they can do this because they've accidentally reinvented the philosophy "logical positivism", which philosophers gave up on because it doesn't work. (This is similar to how they accidentally reinvented reconstructing arguments and called it "steelmanning".)
https://metarationality.com/probability-limitations
What's the probability of AI singularity? It has never happened before so you have no priors and any number you assign will be pure speculation.
Most of the time we make predictions based on how similar events happened in the past. For completely novel situations it's close to impossible to make a prediction and reckless to base policy on such a prediction.
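For what it's worth, the standard rationalist move for never-before-seen events is something like Laplace's rule of succession. A sketch (my example, with a deliberately arbitrary trial definition, which is exactly where the speculation described above creeps in):

    # Laplace's rule of succession: after n trials with k successes,
    # estimate P(success on next trial) = (k + 1) / (n + 2).
    # For an event that has never occurred (k = 0), the estimate
    # shrinks with every opportunity on which it failed to happen.

    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    # Hypothetical: ~70 years of AI research, zero singularities so far.
    print(rule_of_succession(0, 70))  # ~0.014 per "year-sized" trial

The output depends entirely on what counts as a trial, which is the point: the formula manufactures a number, not knowledge.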
But it was necessary at the beginning of flight and the flight to the moon would've never been possible either without a few talented people being able to make predictions about scenarios they knew little about.
There are just way too many people around nowadays, which is why most of us never get confronted with such novel topics, and consequently we don't know how to reason about them.
> Same is true about anything you're trying to forecast, by definition of it being in the future
There might be some flaws in this line of reasoning...
>And yet people have figured out how to make predictions more narrow than shrugging
And?
There are others, such as the unproven, narcissistic and frankly unlikely-to-be-true assumption that humanity continuing to exist is a net positive in the long run.
They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.
Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.
You don't have to agree with any of this. I am not defending every idea the author has. But I recommend that book.
This is the logic of someone who has failed to comprehend the core ideas of Calculus 101. You cannot use intuitive reasoning when it comes to infinite sums of numbers with extremely large uncertainties. All that results is making a fool out of yourself.
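A classic illustration of that point (my example, not the commenter's) is the St. Petersburg game: every term of its expected value is 1, so the sum diverges, yet no finite sample average ever settles down, and intuition about "the average payout" fails completely.

    import random

    # St. Petersburg game: the pot starts at 2 and doubles on each
    # consecutive heads; you are paid the pot at the first tails.
    # E[payout] = sum over k of (1/2**k) * 2**k = 1 + 1 + 1 + ... = infinity,
    # but any finite sample mean stays finite and varies wildly.

    def play():
        payout = 2
        while random.random() < 0.5:
            payout *= 2
        return payout

    n = 100_000
    print(sum(play() for _ in range(n)) / n)  # different every run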
It then moved into "you should work a soulless investment banking job so you can give more".
More recently it was "you should excise all expensive fun things from your life, and give 100% of your disposable income to a weird poly sex cult and/or their fraudulent paper hedge fund because they're smarter than you."
In any case, EA smells strongly of “the ends justify the means” which most popular moral philosophies reject with strong arguments. One which resonates with me is that there are no “ends.” The path itself is the goal.
This is a false statement. Our entire modern world is built on the basis of the ends justify the means. Every time money is spent on long term infrastructure vs giving poor kids food right now, every time a war is fought, every time a doctor triages injuries at a disaster.
Winning at things that align with your principle is a principle. If you don't care about principles, you don't care about what you're winning at, thereby making every victory hollow and meaningless. That is how you turn into a loser at everything you do.
Of course it sounds ridiculous when you spell it out this way.
Of course the way your comment is written makes criticism sound silly.
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
Even if an individual person chooses to direct all their donations to a single cause, there's no way to get everyone to donate to a single cause (nor is EA attempting to). Money gets spread around because people have different values.
It absolutely does take some money away from other causes, but only in the sense that all charities do: if you give a lot to one charity, you may have less money to give to others.
If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.
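A rough version of that arithmetic, with round hypothetical numbers (the quadrillion-person figure is the assumption doing all the work):

    # Back-of-envelope for the "hour of acceleration" claim above.
    # All inputs are hypothetical round numbers, not forecasts.

    future_population = 1e15                # a quadrillion people, per the scenario
    extra_person_hours = future_population  # shift it all one hour earlier
    hours_per_life = 80 * 365.25 * 24       # ~700,000 hours in an 80-year life

    print(extra_person_hours / hours_per_life)  # ~1.4e9 life-equivalents

So under those assumptions an hour of acceleration really does pencil out to "billions of souls"; everything hinges on whether you accept the inputs.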
Perhaps you're arguing as an illustration of the way this group of people think, in which case I understand your point.
It encodes a slight bias towards human existence being a positive thing for us humans, but I don't think it's the shakiest part of that reasoning.
I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.
If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.
The current sums invested and donated in altruist causes are rounding errors themselves compared to GDPs of countries, so the revealed preferences of those investing and donating to altruist causes is to care about the future and the present also.
Are you saying that they should give a greater preference to help those who already exist rather than those who may exist in the future?
I see a lot of Peter Singer’s ideas in modern “effective” altruism, but I get the sense from your comment that you don’t think that they have good reasons for doing what they do, or that their reason leads them to support well-meaning but ineffective solutions. I am trying to understand your position without misrepresenting your point or goals. Are you naysaying or do you have an alternative?
https://en.wikipedia.org/wiki/Peter_Singer
If they wanted to help, they should be focused on the now. Global poverty, climate change, despotic world leaders. They should be aligning themselves against such things.
But instead what we see is essentially not that. Effective altruism is a lot like the Democratic People's Republic of Korea, a bit of a misnomer.
A lot of them argue that poor countries essentially don't matter, that climate change is not an extinction event, and that there should be an authoritarian world government to prevent nuclear conflict and minimize the risk of nuclear extinction.
>In his dissertation On the Overwhelming Importance of Shaping the Far Future (2013), supposedly “one of the best texts on existential risks,”[9] Nicholas Beckstead meditates on the “ripple effects” a human life might have for future generations and concludes “that saving a life in a rich country is substantially more important than saving a life in a poor country” due to the higher level of innovation and economic productivity attained in these countries.[10]
https://umbau.hfg-karlsruhe.de/posts/philosophy-against-the-...
To be pedantic, the DPRK is run via the will of the people to a degree comparable to any country. A bigger misnomer is the West calling liberal democracy just "democracy".
It really annoys me when people say that those religious cultists do that.
They derive their bullshit from faulty, poorly thought-out premises.
If you fuck up the very first calculations of the algorithm, it doesn't matter how rigorous all the subsequent steps are. The results are going to be all wrong.
(1) The kind of Gatesian solutions they like to fund like mosquito nets are part of the problem, not part of the solution as I see it. If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want. Expecting NGOs to do everything for them is the same kind of neoliberal thinking that has rotted state capacity in the core and set us up for a political crisis.
(2) It is one thing to do something wrong, realize it was a mistake, and then make amends. It's another thing to plan to do something wrong and to try to offset it somehow. Many of the high-paying jobs that EA wants young people to enter are "part of the problem" when it comes to declining state capacity, legitimation crisis, and not dealing with immediate problems -- like the fact that one of these days there's going to be a heat wave that is a mass casualty event.
Furthermore
(3) Time discounting is a central part of economic planning
https://en.wikipedia.org/wiki/Social_discount_rate
It is controversial as hell, but one of the many things the Soviet Union got wrong before the 1980s was planning with a discount rate of zero, which led to many economically and ecologically harmful projects (see the discounting sketch at the end of this comment). If you seriously think it should be zero you should also be considering whether anybody should work in the finance industry at all or if we should have dropped a hydrogen bomb on Exxon's headquarters yesterday. At some point speculations about the future are just speculation. When it comes to the nuclear waste issue, for instance, I don't think we have any idea what state people are going to be in 20,000 years. They might be really pissed that we buried spent nuclear fuel some place they can't get at it. Even the plan to burn plutonium completely in fast breeder reactors has an air of unreality about it; even though it happens on a relatively short 1000-year timescale, we can't be sure at all that anyone will be around to finish the job.
(4) If you are looking for low-probability events to worry about I think you could find a lot of them. If it was really a movement of free thinkers they'd be concerned about 4,000 horsemen of the apocalypse, not the 4 or so that they are allowed to talk about -- but talk about a bunch of people who'll cancel you if you "think different". Somehow climate change and legitimation crisis just get... ignored.
(5) Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion, not least "The Singularity" was talked about by Jesuit Priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.
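As promised in point (3), a minimal sketch of how discounting works, with hypothetical numbers: present value is PV = FV / (1 + r)^t, so any positive rate makes far-future costs and benefits vanish, while a zero rate gives the year-20,000 waste question full weight today.

    # Present value of a benefit of 1 unit, t years out, at discount rate r.
    # At r = 0 the far future counts at full weight, which is how zero-rate
    # planning can end up justifying almost any long-horizon project.

    def present_value(benefit, rate, years):
        return benefit / (1.0 + rate) ** years

    for years in (10, 100, 1000):
        print(years, present_value(1.0, 0.03, years), present_value(1.0, 0.0, years))
    # At 3%: ~0.74 at 10y, ~0.05 at 100y, ~1.5e-13 at 1000y.
    # At 0%: 1.0 at every horizon.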
The difficulty is in deriving any useful utility function from prices (even via preferences :), and as you know, econs can't rid themselves of that particular intrusive thought
https://mitsloan.mit.edu/sites/default/files/inline-files/So...
E: know any econs taking Habermas seriously? Not a rhetorical q:
http://ecoport.org/storedReference/558800.pdf
[1] Though you might come to the conclusion that greedier people should have the money because they like it more
(Aside from the semi-tragic one to consider additive dilogarithms..)
One actionable (utility agnostic) suggestion: study the measureable consequences of (quantifiable) policy on carbon pricing, because this is already quite close to the uncontroversial bits
Nuclear waste issues are 99.9% present-day political/ideological. Huge portions of the Earth are uninhabitable due to climate and/or geology. Lead, mercury, arsenic, and other naturally-occurring poisons contaminate large areas. Volcanoes spew CO2 and toxic gasses by the megaton.
Vs. when is the last time you heard someone get excited over toxic waste left behind by the Roman Empire?
They don't even do this.
If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience way more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is in effect the best thing that could happen.
Which is why accelerationism ignoring all the AGI risks is the correct strategy, presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).
Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.
You can't deprive non-existent people out of anything.
And if you do, I hope you're ready for purely logical, deductive follow up - every droplet of sperm is sacred and should be used to impregnate.
Most of the criticisms are just "But they think they are better than us!" and the rest is "But sometimes they are wrong!"
I don't know about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people having their egos hurt, but in the end it reminds me why I like rationalists and don't like most people.
Being able to do that is pretty much "entry level cognition" for a lot of us. You should be doing that yourself and doing it all the time if you want to play with the big kids.
One of the things I really miss about the old nerds-only programmer's pit setup was the amount of room we had for instruction, especially regarding social issues. The scenes from the college department in Wargames were really on the nose, but highlight a form of education that was unavoidable if you couldn't just dip out of a conversation.
Garak: Especially when they're neither.
Extra points for that comment's author implying that people who don't like the wrong and smug movement are unintelligent and protecting their egos, thus personally proving its smugness
As for smugness, it is subjective. Are those people smug? Or are they talking passionately about some issue, with the confidence of someone who feels what they are talking about and expects it to resonate? It's in the eye of the beholder, I guess.
For example, what you call my smugness is what I would call a slightly depressed attitude, fueled by the fact that it's sometimes hard to relate to other people's feelings and behavior.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
There is something about a particular "narrowband" signaling approach, where a certain kind of purity is sought, with an expectation that, given enough explaining, you will finally get it, become enlightened, and convert to the ranks. A more "wideband" approach would at least admit observations like yours do exist and must be comprehensively addressed to the satisfaction of those who hold such beliefs vs to the satisfaction of those merely "stooping" to address them (again in the hopes they'll just see the light so everyone can get back to narrowband-ville).
edit: oh, also, I think that a good part of people's aversion to the rationalists is just a reaction to the narrowband quality itself, not to the content. People are well-aware of the sorts of things that narrowband self-justifying philosophies lead to, from countless examples, whether it's at the personal level (an unaccountable schoolteacher) or societal (a genocidal movement). We don't trust a group unless they specifically demonstrate non-narrowbandedness, which means being collectively willing to change their behavior in ways that don't make sense to them. Any movement that co-opts the idea of what is morally justifiable---who says that e.g. rationality is what produces truth and things that run counter to it do not---is inherently frightening.
Any group that focuses on their own goals of high paying jobs regardless of the morality of those jobs or how they contribute to the structural issues of society is not that good. Then donating money while otherwise being okay with the status quo —- not touching anything systemic in such an unjust world but supposedly focusing on morality is laughable.
Not only that, but this is exactly the kind of scenario where we should be giving those signals the most weight: The individual estimating whether to join up with a tribe. (As opposed to, say, bad feelings about doing calculus.)
Not only does it involve humans-predicting-humans (where we have a rather privileged set of tools) but there have been millions of years of selective pressure to be decent at it.
Maybe they want to do it in a way I’d consider just: By exercising their rights as individuals in their personal domains and effectively airing their arguments in the public sphere to win elections.
But my intuition is they think democracy and personal rights of the non-elect are part of the problem to rationalize around and over.
Would genuinely love to read some Rationalist discourse on this question.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
All lives are equal isn't a math problem. "Fuck it blow up the foreigners to keep oil prices low" is a math problem, it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the number of scientific advances that have come out of SK has damn well paid back any tax dollars my grandparents paid, many orders of magnitude over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because, and these are what people who voted against those programs directly said was their reasons "white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
In college, I became a scale-dependent realist, which is to say, that I'm most confident of theories / knowledge in the 1-meter, 1-day, 1 m/s scales and increasingly skeptical of our understanding of things that are bigger/smaller, have longer/short timeframes, or faster velocities. Maybe there is a technical name for my position? But, it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than daily experience.
Musing on the different failure-directions: Pretty much any terrible present thing against people can be rationalized by arguing that one gadzillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: make offerings to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more-familiar, we may have more cultural experience and moral tools for detection and prevention.
A check on either process would be that the denigrated real/nearby humans revolt. :p
I had not read any rationalist writing in a long time (and I didn't know about Scott's proximity), but the whole time I was reading the article I was thinking the same thing you just wrote: "why are they afraid of AI, i.e. the ultimate rationalist, taking over the world?" Maybe something deep inside of them has the same reaction to their own theories as you so eloquently put above.
The AI will do what it's programmed to do, but its programmers' morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
[0]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Similar claims can be made about any structure of humans that exhibits gestalt intelligence, e.g. nations, stock markets, etc.
This sort of reasoning sounds great from 1000 feet up, but the longer you do it the closer you get to "I need to kill nearly all current humans to eliminate genetic diseases and control global warming and institute an absolute global rationalist dictatorship to prevent wars or humanity is doomed over the long run".
Or you get people who are working in a near panic to bring about godlike AI because they think that once the AI singularity happens the new AI God will look back in history and kill anybody who didn't work their hardest to bring it into existence because they assume an infinite mind will contain infinite cruelty.
People benefit from a sense of family, a sense of community. It helps us feel more secure, both for ourselves and for our loved ones.
I think the more I view things through this lens, the more downstream benefits I see.
I really like the depth of analysis in your comment, but I think there's one important element missing, which is that this is not an individual decision but a group heuristic to which individuals are then sensitized. Individuals don't typically go so far as to (consciously or unconsciously) extrapolate others' logic forward to decide that it's dangerous. Instead, people get creeped out when other people don't adhere to social patterns and principles that are normalized as safe in their culture, because the consequences are unknown and therefore potentially dangerous; or when they do adhere to patterns that are culturally believed to be dangerous.
This can be used successfully to identify things that are really dangerous, but also has a high false positive rate (people with disabilities, gender identities, or physical characteristics that are not common or accepted within the beholder's culture can all trigger this, despite not posing any immediate/inherent threat) as well as a high false negative rate (many serial killers are noted to have been very charismatic, because they put effort into studying how to behave to not trigger this instinct). When we speak of something being normalized, we're talking about it becoming accepted by the mainstream so that it no longer triggers the 'creepy' response in the majority of individuals.
As far as I can tell, the social conservative basically believes that the set of normalized things has been carefully evolved over many generations, and therefore should be maintained (or at least modified only very cautiously) even if we don't understand why they are as they are, while the social liberal believes that we the current generation are capable of making informed judgements about which things are and aren't harmful to a degree that we can (and therefore should) continuously iterate on that set to approach an ideal goal state in which it contains only things that are factually known to be harmful.
As an interesting aside, the ‘creepy’ emotion, (at least IIRC in women) is triggered not by obviously dangerous situations but by ambiguously dangerous situations, i.e. ones that don't obviously match the pattern of known safe or unsafe situations.
> Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors".
The problem with the ‘us before them’ approach is that if two neighbourhoods each prioritize their local neighbourhood over the remote neighbourhood and compete (or go to war) to better their own neighbourhood at the cost of the other, generally both neighbourhoods are left worse off than they started, at least in the short term: both groups trying to make locally optimal choices leads (without further constraints) to globally highly suboptimal outcomes. In recognition of this a lot of people, not just capital-R Rationalists, now believe that at least in the abstract we should really be trying to optimize for global outcomes.
Whether anybody realistically has the computational ability to do so effectively is a different question, of course. Certainly I personally think the future-discounting ‘bias’ is a heuristic used to acknowledge the inherent uncertainty of any future outcome we might be trying to assign moral weight to, and should be accorded some respect. Perhaps you can make the same argument for the locality bias, but I guess that Rationalists (generally) either believe that you can't, or at least have a moral duty to optimize for the largest scope your computational power allows.
(...that said, progressivism has largely failed in dispelling this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities because in many ways it does, or has been co-opted to do that. I think on net it does not, but it should be much more cut-and-dry than it is. For that to be the case progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism).
re: the first part of your reply. I sorta agree but I do think people do more extrapolation than you're saying on their own. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of failure modes of societies, which includes all kinds of examples of people inventing new moral rationalizations for things and using them to disregard personal morality. I think when people are extrapolating rationalists' ideas to find things that creep them out, they're largely pattern-matching to arguments they've seen in other places. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for not being open-minded about it or being unempathetic or being cruel or not giving people chances to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though (e.g. the OP I was replying to who was taking all these criticisms of the rationalists at face value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of the modern political polarization). A small example of this is a case that I think everyone has experienced, which is a person discounting their own sense of creepiness from somebody else because they can't come up with a good reason to explain it and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)
One thing I'd like to add though is that I do think there is an additional piece being discarded irrationally. They tend to highly undervalue everything you're describing. Humans aren't Vulcans. By being so obsessed with the risks of paperclip-maximizing-robots they devalue the risks of humans being the irrational animals they are.
This is why many on the left criticize them for being right wing. Not because they are, well some might be, but because they are incredibly easy to distract from what is being communicated by focusing too much on what is being said. That might be a bad phrasing, but what I mean is that when you look at this piece from last year about prison sentence length and crime rates by Scott Alexander[0], nothing he says is genuinely unreasonable. He's generally evaluating the data fairly and rationally. Some might disagree there but that's not my point. My point is that he's talking to a nonexistent group. The right largely believes that punishment is the point of prison. They might _say_ the goal is to reduce crime, but they are communicating based on a set of beliefs that strongly favors punitive measures for their own sake. This causes a piece like that to miss the forest for the trees, and it can be seen by those on the left as functionally right-wing propaganda.
Most people are not rational. Maybe some day they will be but until then it is dangerous to assume and act as if they are. This makes me see the rationalists as actually rather irrational.
0: https://www.astralcodexten.com/p/prison-and-crime-much-more-...
The primary issues as others have noted is they focus on people going to the highest paying jobs without much care for morality of the jobs. Ergo they are fine being net negatives in terms of their work and philosophy.
All they do is donate money. Donations don’t fix society. Nothing changes structurally. No root problems are looked at.
They largely ignore capitalism’s faults or when I’ve seen them talk about, it’s done in a way of superficially decrying capitalist issues but then largely going along with them. Which ties into how they focus on high paying jobs regardless of morality (I’m exaggerating here but the overall point is correct).
—
HN is not intelligent when it comes to politics or the world. The avg person here is a western chauvinist with little political knowledge but a defensive ego about it. No need to be sad about this comment page.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...
I'm not sure where there are any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well aware of the limits of data-driven initiatives and the dangers of indulging in purely abstract thinking that can lead to conclusions that indeed don't make sense.
The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...
It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....
EAs gave $1,886,513,058 through GiveWell[1], and there is 0 AI stuff in there (you can search in the linked Airtable spreadsheet).
There is also a whole movement around making a lifetime commitment to give 10% of your earnings to charity; 9,880 people have taken the pledge so far[2].
[1] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...
[2] https://www.givingwhatwecan.org/pledge
I also believe that idealistic people will go to great lengths to convince themselves that their desired outcome is, in fact, the moral one. It starts by saying things like, "Well, what is harm, actually..." and then constructing a definition that supports the conclusions they've already arrived at.
I'm quite sure Sam Bankman-Fried did not believe he was harming anybody when he lost/stole/defrauded his investors and depositors' money.
What is your alternative? What's your framework that makes you contribute to malaria prevention more or more effectively than EAs do? Or is the claim instead that people should shut down conversation within EA that strays from the EA mode?
How much more do they need to give before you will change your mind about whether “EA's actually want to do something about malaria”?
[1] https://www.givewell.org/all-grants-fund
[2] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...
I am plenty happy to simp for the Gates foundation, but I think it's important to acknowledge that becoming Bill Gates to support charity is not a strategy the average person can replicate. The question for me is how do I live my life to support the causes I care about, not who lives more impressive lives than me.
If you exclude "nations" then it does look to be the Church: "The Church operates more than 140,000 schools, 10,000 orphanages, 5,000 hospitals and some 16,000 other health clinics". Caritas, the relevant charitable umbrella organization, gives $2-4b per year on its own, and that's not including the many, many operations run by religious orders not under that umbrella, or by the hundreds of thousands of parishes around the world (most of which operate charitable operations of their own).
And yet, rationalists are totally happy criticizing the Catholic Church -- not that I'm complaining, but it seems a bit hypocritical.
Similarly, it's not like government funding is an overlooked part of EA. Working on government and government aid programs is something EA talks about, high leverage areas like policy especially. If there's a more standard government role that an individual can take that has better outcomes than what EAs do, that would be an important update and I'd be interested in hearing it. But the criticism that EA is just not large enough is hard to action on, and more of a work in progress than a moral failing.
There has to be (or ought to be) a name for this kind of epistemological fallacy, where, in pursuit of truth, logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally far more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22 https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
HN should be better than this.
Can people suffer from that impairment? Is that possible? If not, please explain how wrong assumptions can be eliminated without actively looking for them. If the impairment is real, what would you call its victims? Pick your own terminology.
Calling someone a dumbass in this situation is a kindness, because the assumption is that they're capable of not being one with a little self-reflection.
First they laugh...
I don't think any "Rationalists" I ever met would actually consider concepts like the scientific method...
In that case I don't think you've met any of the people under discussion.
I have read Effective Altruists like that. But I also remember seeing a lot of money donated to a bunch of really decent sounding causes because someone spent 5 minutes asking themselves what they wanted their donation to maximise, decided on "Lives saved" and figured out who is doing the best at that.
Honestly thought they were the same people
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Highly recommended.
For anyone else reading, a good example of what EAs do can be seen with the GiveWell charity: https://www.givewell.org/top-charities-fund
Lots of anti-malaria and vitamin stuff (as a cheap way to save lots of lives). There are also tons of EA animal charities too, such as Humane League: https://thehumaneleague.org/our-impact
If they want to donate to charity, they can just donate. You don't gotta make a religion out of it.
I also think the ambiguity of meaning in natural language is why statistical LLMs are so popular with this crowd. You don't need to think about meaning and parsing. Whatever the LLM assumes is the meaning is whatever the meaning is.
Logic requires properties of metaphysical objectivity.
If we used words by their true meanings, claiming such things are true when they are in fact false would be called irrationality, delusion, sophistry, or fallacy.
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, though, I'd rather have some arrogance and a willingness to be debated and be wrong than a timid need to defer to centuries of established thought. The people I've met in person I've always been happy to hang out with and talk to.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
The problem is less clear in philosophy than mathematics, but it's still there. It's really easy, on your own terms, to come up with some idea that the collective intelligence has already revealed to be fatally flawed in some undeniable manner, or that at the very least has very powerful arguments against it that an individual may never consider. The ideas that have survived decades, centuries, and even millennia against the collective weight of humanity assaulting them are going to have a certain character that "something someone came up with last week" will lack.
(That said I am quite heterodox in one way, which is that I'm not a big believer in reading primary sources, at least routinely. Personally I think that a lot of the primary sources noticeably lack the refinement and polish added as humanity chews it over and processes it and I prefer mostly pulling from the result of the process, and not from the one person who happened to introduce a particular idea. Such a source may be interesting for other reasons, but not in my opinion for philosophy.)
I'm not sure if this counterpoint generalizes entirely to the original critique, since certainly LessWrongers aren't usually posting about or discussing math as if they've discovered it-- usually substantially more niche topics.
Or because western culture reflects this theme continuously through all the culture and media you've immersed in since you were a child?
Also the idea is definitely not new to Descartes, you can find echoes of it going back to Plato, so your idea isn't wrong per se. But I think it underrates the effect to which our philosophical preconceptions are culturally constructed.
I come from a physics background. We used to (and still) have a ton of physicists who decide to dabble in a new field, secure in their knowledge that they are smarter than the people doing it, and that anything worthwhile that has already been thought of they can just rederive ad hoc when needed (economists are the only other group that seems to have this tendency...) [1]. It turned out every time that the people who had spent decades working on, studying, discussing, and debating the field in question had actually figured important shit out along the way. They might not have come with the mathematical toolbox that physicists had, and outside perspectives that challenge established thinking to prove itself again can be valuable, but when your goal is to actually understand what's happening in the real world, you can't ignore what's been done.
[1] There even is an xkcd about this:
https://xkcd.com/793/
Suppose a foot race. Choose two runners of equal aptitude and finite existence. Start one at mile 1 and one at mile 100. Who do you think will get farther?
Not to mention, engaging in human community and discourse is a big part of what it means to be human. Knowledge isn't personal or isolated; we build it together. The "first principles people" understand this to the extent that they have even built their own community of like-minded explorers. The problem is, a big part of this bond is their choice to be willfully ignorant of large swaths of human intellectual development. Not only is this stupid, it also is a great disservice to your forebears, who worked just as hard to come to their conclusions and who have been building up the edifice of science bit by bit. It's completely antithetical to the spirit of scientific endeavor.
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "SlimeTimeMoldTime - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
That feels like revisionist history to me. It rose to fame on LessWrong and SlateStarCodex, was promoted by Yudkowsky, and proliferated for about a year and a half before the takedowns finally got traction.
While it was the topic du jour in the rationalist spaces it was very difficult to argue against. I vividly remember how hard it was to convince anyone that SMTM wasn't a good source at the time, because so many people saw Yudkowsky endorse it, saw Scott Alexander give it a shout-out, and so on.
Now Yudkowsky has gone back and edited his old endorsement, it has disappeared from the discourse, and many want to pretend the whole episode never happened.
> (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
Exactly my point. It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded. It finally took someone writing it up in the form of rationalist rhetoric and seeding it into LessWrong to break the spell.
This is the trend with rationalist-centric contrarianism: You have to code your articles with the correct prose, structure, and signs to get uptake in the rationalist community. Once you see it, it's hard to miss.
Do you have any examples of this that predate that LW article? Ideally both the critique and its dismissal but just the critique would be great. The original HN submission had a few comments critiquing it but I didn't see anything in depth (or for that matter as strident).
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
https://www.smbc-comics.com/?id=2556
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario, which predicts a rapid escalation of AI superpowers followed by disaster in only a couple of years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors, they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means that when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right, with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's my point: Their rhetorical style is interpreted by the in-group as a sort of weird infallibility. Like they've covered both sides and therefore the work is technically correct in all cases. Once they go through the hedging dance, they can put forth the opinion-based point they're trying to make in a very persuasive way, falling back to the hedging in the future if it turns out to be completely wrong.
The writing style looks different depending on where you stand: Reading it in the forward direction makes it feel like the main point is very likely. Reading it in the backward direction you notice the hedging and decide they were also correct. Yet at the time, the rationalist community attaches themselves to the position being pushed.
> But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
edit: my apologies, that was someone else in the thread. I do feel like between the two comments, though, there is a "damned if you do, damned if you don't". (I found the original quote above absurd when I read it.)
“Damned if you do” indeed…
In most writing, people write less persuasively on topics they have less conviction in.
Ok, let's scroll up the thread. When I refer to "the specific criticism that I quoted", and when you say "implying that because they say and do these things, they must be immune from the criticism delivered above": what do you think was the "criticism delivered above"? Because I thought we were talking about contrarian1234's claim to exactly this "strawman", and you so far have not appeared to disagree with me that this criticism was invalid.
My point wasn't to nit-pick individual predictions, it was a general explanation of how the game is played.
Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
- He predicted at least $250 million in damages from Black Lives Matter protests.
- He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
- He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
These are just random samples from the first blog post that popped up in Google: https://www.astralcodexten.com/p/grading-my-2021-predictions
It's also noteworthy that a lot of his predictions are about his personal life, his own blogging actions, or [redacted] things. These all get mixed in with a small number of geopolitical, economic, and medical predictions, with the net result of bringing his overall accuracy up.
> He predicted at least $250 million in damages from Black Lives Matter protests.
He says
> 5. At least $250 million in damage from BLM protests this year: 30%
which, by my reading, means he assigns greater-than-even odds that _less_ than $250 million in damages happened (I don't know whether that was in fact the outcome, but my reading of your post suggests you believe it was).
You say:
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
while he says:
> Vitamin D is _not_ generally recognized (eg NICE, UpToDate) as effective COVID treatment: 70%
(emphasis mine)
(I feel like you're probably getting upvotes from people who feel similarly, but sometimes I feel like nobody ever writes "I agree with you" comments, so the impression is that there's only disagreement with some point being made.)
Then you start encountering the weirder parts. For me, it was the groupthink and hero worship. I just wanted to read interesting takes on new topics, but if you deviated from the popular narrative associated with the heroes (Scott Alexander, Yudkowsky, Cowen, Aaronson, etc.) it felt like the community's immune system identified you as an intruder and started attacking.
I think a lot of people get drawn into the idea of it being a community where they finally belong. Especially on Twitter (where the latest iteration is "TPOT") it's extraordinarily clique-ish and defensive. It feels like high school level social dynamics at play, except the players are equipped with deep reserves of rhetoric and seemingly endless free time to dunk on people and send their followers after people who disagree. It's a very weird contrast to the ideals claimed by the community.
Since when is that what we do here? If he'd written that he'd decided to become vegetarian, would we all be out here talking about how vegetarians are so annoying and one of them even spat on my hamburger one time?
And then of these uncalled-for takedowns, several -- including yours -- don't even seem to be engaging in good-faith discourse, and seem happy to pile on to attacks even when they're completely at odds with their own arguments.
I'm sorry to say it but the one who decided to use their free time to leer at people un-provoked over the internet seems to be you.
How was it condescending or lecturing?
> You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
Why is that not equally condescending or lecturing?
I genuinely don't understand how you can point to someone's calibration curve where they've broadly done well, and cherry pick the failed predictions they made, and use this not just to claim that they're making bad predictions but that they're slimy about admitting error. What more could you possibly want from someone than a tally of their prediction record graded against the probability they explicitly assigned to it?
One man's modus ponens, as it goes.
You seem to be trying to insinuate that Alexander et al. are pretending to know how things will turn out and then hiding behind probabilities when they don't turn out that way. This is missing the point completely. The point is that when Alexander assigns an 80% probability to many different outcomes, about 80% of them should occur, and it should not be clear to anyone (including Alexander) ahead of time which 80%.
> He predicted at least $250 million in damages from Black Lives Matter protests.
Many sources estimated damages at $2 billion or more (see https://www.usatoday.com/story/news/factcheck/2022/02/22/fac... and links from there), so this did in fact come true.
Edit: I see that the prediction relates to 2021 specifically. In the wake of 2020, I think it was perfectly reasonable to make such a prediction at that confidence level, even if it didn't actually turn out that way.
> He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
If you make many predictions at 70-80% confidence, as he does, you should expect 20-30% of them not to come true. It would in fact be a failure (underconfidence) if they all came true. You are in fact citing a blog post that is exactly about a self-assessment of those confidence levels.
Also, he gave a 70% chance to Vitamin D not being generally recognized as a good COVID treatment.
> These all get mixed in with a small number of geopolitical, economic, and medical predictions with the net result of bringing his overall accuracy up.
The point is not "overall accuracy", but overall calibration - i.e., whether his assigned probabilities end up making sense and being statistically validated.
You have done nothing to establish any correlation between the category of prediction and his accuracy on it.
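To make "calibration" concrete, here's a minimal Python sketch of how a graded prediction list gets scored. The probabilities and outcomes below are invented stand-ins for illustration, not Scott's actual record:

    # Minimal calibration check: group graded predictions by claimed
    # probability, then compare each bucket's hit rate to that probability.
    # The entries are illustrative stand-ins, not anyone's real record.
    from collections import defaultdict

    predictions = [
        (0.8, False),  # e.g. "Yang wins NYC mayoral race: 80%" -- missed
        (0.8, True),
        (0.7, True),   # e.g. "Vitamin D NOT recognized as COVID treatment: 70%" -- hit
        (0.3, False),  # e.g. "$250M+ BLM damage in 2021: 30%" -- missed
    ]

    buckets = defaultdict(list)
    for prob, happened in predictions:
        buckets[prob].append(happened)

    for prob in sorted(buckets):
        hits = buckets[prob]
        rate = sum(hits) / len(hits)
        # Well-calibrated: rate should track prob; it need not be 1.0.
        print(f"claimed {prob:.0%}: {rate:.0%} came true over {len(hits)} predictions")

The grading is against each bucket's hit rate, not against whether every individual headline prediction landed.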
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, and hiring exceptional talent, at a time when there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular, circlejerks lead to idiotic ideas even when paired with rationalist rhetoric: so they bought mansions for team building (how else are you getting exceptional talent?), praised crypto (because it was funding the best and brightest), and started caring a lot about shrimp welfare (no one else does).
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
Ah. They are working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I have no belief any particular individual can do anything about shrimp welfare more than they can about the intractable problems we do face.
I think it's a result of their complete denial and ignorance of politics. Because the rationalist and effective altruist movements make a whole lot more sense if you realize they are talking about deeply social and political issues with all politics removed. It's technocratism, the poster child of the kind of "there is no alternative" neoliberalism that everyone in the western world has been indoctrinated into since the 80s.
It's a fundamental contradiction: we don't need to talk about politics because we already know liberal democracy and free-market capitalism are the best we are ever going to achieve, even as we face numerous intractable problems that cannot possibly be related to liberal democracy and free-market capitalism.
The problem is: How do we talk about any issue the world is facing today without ever challenging or even talking about any of the many assumptions the western liberal democracies are based upon? In other words: the problems we face are structural/systemic, but we are not allowed to talk about the structures/systems. That's how you end up with space flight and shrimp welfare and AGI/ASI catastrophizing taking up 99% of everything these people talk about. It's infantile, impotent liberal escapism more than anything else.
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
As far as I know it's never hosted an impress-the-oligarch fundraiser, which as you say would at least have a logic behind it[1] even if it might seem distasteful.
For a philosophy which started out from the point of view that much of mainstream aid was spent with little thought, it was a bit of an end of Animal Farm moment.
(to their credit, a lot of people who identified as EAs were unhappy. If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
[1] a pretty shaky one, considering how easy it is to impress American billionaires with Oxford architecture without going to the expense of operating a nearby mansion as a venue, particularly if you happen to be a charitable movement with strong links to the university...
[2] obviously people are only objecting to it for PR purposes, because they're not smart enough to realise that capital appreciates and that venues cost money, and definitely not because they've got a pretty good idea how expensive upkeep on little-used medieval venues is and how many alternatives exist if you really care about the cost effectiveness of your retreat, especially for charitable movements affiliated with a university...
I’m a bit confused by this one.
Are you saying that no-one who identifies as rationalist sneered at the objections? Because I don’t think that’s true.
>As far as I know it's never hosted an impress-the-oligarch fundraiser
As far as I know, they only hosted 3 events there before deciding to sell, so this is low-information.
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it isn't actually applied to their own ideas? What you described otherwise is just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people".
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
Eliezer did once state his intention to build "friendly AI", but seems to have been thwarted by caring more about first-order reasoning on how AI decision theory should work than about building something that actually did work, even when others figured out the latter bit.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd probably get from that pressure if it keeps getting better.
I suspect you'll see a few people "win" or strike it rich with AI, the vast majority will simply be left with a big bill.
It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
Perhaps American AI efforts will one day be viewed similarly. “Yeah, they had an early rush, lots of innovation, high valuations, and robber barons competing. Today it’s just stale old infra despite the high-energy start.”
Humans are also freight, of course. It is not like the rail companies really care about what kind of freight is on the trains, so long as it is what the customer considers most important (read: most profitable). Humans are deprioritized exactly because they aren't considered important by the customer, which is to say that the customer, who is also the freight in this case, doesn't really want to be on a train in the first place. The customer would absolutely ensure priority (read: pay more, making it known that they are priority) if they wanted to be there.
I understand the train geeks on the internet find it hard to believe that not everyone loves trains like they do, but the harsh reality is that the average American Joe prefers other means of transportation. Should that change in the future, the rail network will quickly accommodate. It has before!
For what it's worth, I like traveling by train and do so whenever I can, but I'm an outlier. Most Americans look at the travel times and laugh at the premise of choosing a train over a plane. And when I say they look at the travel times, I don't mean they actually bother to look up train routes. They just know that airplanes are several times faster. Delays suffered by trains never get factored into the decision because trains aren't taken seriously in the first place.
https://www.worldatlas.com/articles/highest-railway-cargo-tr...
You are comparing the USA today to its robber baron phase; who's to say China isn't in the same phase? Lots of money being thrown at new railroads, and you have China's best engineering and management talent chasing that money. What happens when it goes into low-budget maintenance mode?
Nonsense. The US has the largest freight rail system in the world, and is considered to have the most efficient rail system in the world to go along with it.
There isn't much in the way of passenger service, granted, but that's because people in the US aren't, well, poor. They can afford better transportation options.
> It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
It initially built out the passenger rail just fine, but then evolution saw better options come along. Passenger rail disappeared because it no longer served a purpose. It is not like, say, Japan where the median household income is approaching half that of Mississippi and they hold on to rail because that's what is affordable.
1. Freight is easier to manage and has better economics on a dedicated network. The US freight network is extremely efficient as others have pointed out. Other networks, e.g., Germany, instead prioritized passenger service. In Germany rail moves a small proportion of freight (19%) compared to trucks. [0] It's really noticeable on the Autobahn and unlike the US where a lot of truck traffic is intermodal loads.
2. The US could have better rail service by investing in passenger networks. Instead we have boondoggles like the California high-speed rail project which has already burned through 10s of billions of dollars with no end in sight. [1] Or the New Jersey Transit system which I had the pleasure to ride on earlier today to Newark Airport. It has pretty good coverage but needs investment.
[0] https://dhl-freight-connections.com/en/trends/global-freight...
[1] https://en.wikipedia.org/wiki/California_High-Speed_Rail
How so?
> The US freight network is extremely efficient as others have pointed out.
'Others' being literally the comment you replied to.
> The US could have better rail service by investing in passenger networks.
Everything there is can be improved, of course, but to what significance here?
This is such a misguided view... Trains (when done right) aren't "for the poor"; they are a great transportation option that beats both airplanes and cars. In Poland, which isn't even close to the best, you can travel between big cities at speeds above 200 km/h, and you can use regional rail for your daily commute, both options being very comfortable and convenient, much more so than traveling by car.
What gives you the idea that rail would be preferable to flying for the NYC to LAS route if only it existed? Even as the crow flies it is approximately 4,000 km, meaning that at 200 km/h you are still looking at around 20 hours of travel in an ideal case. Instead of just 5 hours by plane. If you're poor an additional 15 hours wasted might not mean much, but when time is valuable?
Why would you constrain the route to within a specific state? In fact, right now a high-speed rail line is being planned between Las Vegas and LA.
But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including:
Chicago/Detroit
Dallas/Houston
LA/SF
Atlanta/Charlotte
Right now and since 1979!
I'll grant you that people love to plan, but it turns out that they don't love putting on their boots and picking up a shovel nearly as much.
> But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including
And there is nothing stopping those lines from being built other than the lack of will to do it. As before, the will doesn't exist because better options exist.
There is no magic in this world like you seem to want to pretend. All of those things simply boil down to people. Property rights only exist because people say they do, environmental reviews only exist because people say they do, skilled workers are, well, literally people, and the necessary capital is already created. If the capital is being directed to other purposes, it is only because people decided those purposes are more important. All of this can change if the people want it to.
> HN users sometimes have this weird fantasy that with enough political will it's possible to make enormous changes but that's simply not how things operate in a republic with a dual sovereignty system.
Hell, the republic and dual sovereignty system itself only exists because that's what people have decided upon. Believe it or not, it wasn't enacted by some mythical genie in the sky. The people can change it all on a whim if the will is there.
The will isn't there of course, as there is no reason for the will to be there given that there are better options anyway, but if the will was there it'd be done already (like it already is in a few corners of the country where the will was present).
There has been continuous regularly scheduled passenger service between Chicago and Detroit since before the Civil War. The current Amtrak Wolverine runs 110 MPH (180 KPH) for 90% of the route, using essentially the same trainset that Brightline plans to use.
They’ve made a lot of investments since the 1990s. It’s much improved, though perhaps not as nice as during the golden years when it was a big part of the New York Central system (from the 1890s to the 1960s they had daily trains that went Boston/NYC/Buffalo/Detroit/Chicago through Canada from Niagara Falls to Windsor).
During the first Trump administration, Amtrak announced a route that would go Chicago/Detroit/Toronto/Montreal/Quebec City using that same rail tunnel underneath the Detroit River. It was supposed to start by 2030. We’ll see if it happens.
I've taken a Chinese train from Zhengzhou, in central China, to Shenzhen, and it was fantastic. Cheap, smooth, fast, lots of legroom, easy to get on and off or walk around to the dining car. And, there's a thing where boiling hot water is available, so everyone brings instant noodle packs of every variety to eat on the train.
Can't even imagine what the US would be like if we had that kind of thing.
Getting to the airport in most major cities takes an hour, and then there's the whole pre-flight security theatre, and the flights themselves are rarely pleasant. To add insult to injury, in the US it's usually a $50 cab ride to the airport and there are $28 ham-and-cheese sandwiches in the terminal if you get hungry.
In China and Japan the trains are centrally located, getting on takes ten minutes, and the rides are extremely comfortable. If such a thing existed in the US I think it would be extremely popular. Even if it was just SF-LA-Vegas.
Anyway, New York to Las Vegas spans most of the US. There are plenty of routes in the US where rail would make sense: between Boston, New Haven, New York City, Philadelphia, Baltimore, and Washington, D.C., which has the Amtrak Acela. Or perhaps Miami to Orlando, which has a privately funded high-speed rail connection called Brightline that runs at 200 km/h and whose ridership was triple what had been expected at launch.
I am, thankfully, not.
> Which has a privately funded high speed rail connection called Brightline that runs at 200 km/h
Which proves that when the will is there, it will be done. The only impediment in other places is simply the people not wanting it. If they wanted it, it would already be there.
The US has been here before. It built out a pretty good, even great, passenger rail network a couple of centuries ago when the people wanted it. It eventually died out simply because the people didn't want it anymore.
If they want it again in the future, it will return. But as for the moment...
Since nobody really wants passenger rail in the US, they don't put in the effort to see that it exists (outside of some particular routes where they do want it). In many other countries, people do want broad access to passenger rail (because that's all they can afford), so they put in the effort to have it.
~200 years ago the US did want passenger rail, they put in the work to realize it, and it did have a pretty good passenger rail network at the time given the period. But, again, better technology came along, so people stopped maintaining/improving what was there. They could do it again if they wanted to... But they don't.
The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.
On the one hand, I guess that's just life here in reality.
On the other, man, reality sucks sometimes.
Imagine if they were bought by losers.
It's not social media. It's a model the capitalists train and own. Best the rest of us will have access to are open source ones. It's like the difference between trying to go into court backed by google searches as opposed to Lexis/Nexis. You're gonna have a bad day with the judge.
Here's hoping the open source stuff gets trained on quality data rather than reddit and 4chan. Given how the courts are leaning on copyright, and lack of vetted data outside copyright holder remit, I'm not sanguine about the chances of parity long term.
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
That's going to be a swift kick to your economy, no matter how strong.
Have you ever read Scott Alexander's blog (Slate Star Codex, now Astral Codex X)? It's full of doubt and self-questioning. The guy even keeps a public list of his mistakes:
https://www.astralcodexten.com/p/mistakes
I'll admit my only touchpoint to the "rationalist community" is this blog, but I sure don't get "full of themselves" from that. Quite the contrary.
I get it, I enjoyed being told I'm a super genius always right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class, maybe consider there's more good to be done in rural Indiana getting some dirt under those nails..
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
It frequently reduces complex problems into comfortable oversimplifications.
Maybe you don't think that is real wisdom, and maybe that's sort of your point, but then what does real wisdom look like? Should wisdom make you considerate of the multiple contexts it does and doesn't affect? Maybe the issue is we need to better understand how to evaluate and use wisdom. People who truly understand a piece of wisdom should communicate deeply rather than parroting platitudes.
Also to be frank, wisdom is a way of controlling how others perceive a problem, and is a great way to manipulate others by propping up ultimatums or forcing scope. Much of past wisdom is unhelpful or highly irrelevant to modern life.
e.g. "Good things come to those who wait."
Passive waiting rarely produces results. Initiative, timing, and strategic action tend to matter more than patience.
but I don't know enough about it, I'm just trolling.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but won’t probably be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
Thanks, I might actually go do this :) I recently got exposed to a very persuasive form of "rationalism is a social construct" by reading "Alchemy" by Rory Sutherland. But a theme in these comments is that a lot of these ideas are just recycled from philosophers and that the philosophers were less likely to try and induct you into a cult.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
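A toy sketch in Python of that two-level picture (mine, purely illustrative): the exact same bytes of "RAM", described at the circuit level and at the software level.

    # The same five bytes, read at two levels of description.
    ram = bytes([72, 101, 108, 108, 111])

    print(list(ram))            # circuit-level view: [72, 101, 108, 108, 111]
    print(ram.decode("ascii"))  # software-level view: Hello

Neither printout is wrong; the second is just unavailable until you posit the encoding.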
This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, on whom the PEM (principle of the excluded middle) assumption is based.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).
Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.
I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site, I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.
Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.
If you're looking for insults, and declaring the whole conversation a "culture war" as soon as you think you found one, (a) you'll avoid plenty of assholes, but (b) in the end you will read whatever you want to read, not what the thoughtful people are actually writing.
The largest of the sporadic finite simple groups (simple groups themselves being objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and cannot be reduced to simpler "factors". It has a whole bunch of very interesting properties which thus can only be understood by analyzing the whole object in itself.
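(For reference, "cannot be reduced" is about normal subgroups, not integer factorization - the order itself factors perfectly well:

    |M| = 2^46 * 3^20 * 5^9 * 7^6 * 11^2 * 13^3 * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71

It's the group structure, not the number, that has no smaller simple pieces.)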
Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.
Actually, neither do Rationalists, but instead they cosplay at being rational.
What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that has been taught to the generation that lived through the cold war, so that's how they act. That bit was in regards to climate change, but I can easily see it apply to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.
I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
Note: my assumption is not that the bomb would never have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least two places (Newton with fluxions, Leibniz with infinitesimals, I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk Armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
Is it really rationality when folks are somewhat out of touch with reality, replacing it with models that lack life's endless nuances, exceptions and gotchas? Being principled is a good thing, but if I correctly understand what you're talking about, surely ignoring something just because it doesn't fit some arbitrarily selected set of principles is different.
I'm no rationalist (I don't have any meaningful self-identification, although I like the idea of approaching things logically) but I've had enough episodes of being guilty of something like this: having an opinion on something, lacking the depth, but pretending it's fine because my simple mental model is based on some ideas I like and can bring order to the chaos. So maybe it's not rationalism at all, but something else masquerading as it - perhaps a fear of failing to meet expectations?
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
>The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right
“Guy Who Is Always Right” as a role in a social group is a terrible target, yet it somehow seems like what rationalists are aiming for every time I read any of their blog posts
It reminds me of Keir Starmer's Labour, calling themselves "the adults in the room".
It's a cheap framing trick, belying an emptiness in the people using it.
When one finds oneself mentioning Aella as one of the members taking the movement "in new directions," one should stop and ask whether one really is the insightful, well-rounded person with much to say about all sorts of things, or just a very gifted computer scientist who is not well rounded enough to recognize a legitimate dimwit like Aella on sight.
And in general, I do feel like they suffer from "I am a genius at X, so my take on Y should be given special consideration." If you're in a group where everyone's talking about physics and almost none of them are physicists, then run. I'm still surprised at how little consideration these people give philosophy and the centuries of its written thought. Some engineers spend a decade or more building up math and science skills to the point that they can be effective practitioners, but then they think they can hop right into philosophical discussions with no background. Then when they try to analyze a problem philosophically, their brief (or no) experience means that they reason themselves into dead-end positions like philosophical skepticism that were tackled in a variety of ways over the past centuries.
Religions: "Catholic" actually means "universal" (implication: all the real Christians are among our number). "Orthodox" means "teaching the right things" (implication: anyone who isn't one of us is wrong). "Sunni" means "following the correct tradition" (implication: anyone who isn't one of us is wrong).
Political parties: "Democratic Party" (anyone who doesn't belong doesn't like democracy). "Republican Party" (anyone who doesn't belong wants kings back). "Liberal Party" (anyone else is against freedom).
In the world of software, there's "Agile" (everyone else is sluggish and clumsy). "Free software" (as with the liberals: everything else is opposed to freedom). People who like static typing systems tend to call them "strong" (everyone else is weak). People who like the other sort tend to call them "dynamic" (everyone else is rigid and inflexible).
I hate it too, but it's so very very common that I really hope it isn't right to say that everyone who does it is empty-headed or empty-hearted.
The charitable way to look at it: often these movements-and-names come about when some group of people picks a thing they particularly care about, tries extra-hard to do that thing, and uses the thing's name as a label. The "Rationalists" are called that because the particular thing they chose to focus on was rationality; maybe they do it well, maybe not, but it's not so much "no one else is rational" as "we are trying really hard to be as rational as we can".
(Not always. The term "Catholic" really was a power-grab: "we are the universal church, those other guys are schismatic heretics". In a different direction: the other philosophical group called "Rationalists" weren't saying "we think rationality is really important", they were saying "knowledge comes from first-principles reasoning" as opposed to the "Empiricists" who said "knowledge comes from sense experience". Today's "Rationalists" are actually more Empiricist than Rationalist in that sense, as it happens.)
The Catholic Church follows the Melchisedec order (Heb. v; vi; vii). The term Catholic (καθολικη) was used as early as the first century; it is an adjective which describes Christianity.
The oldest record that we have to this day is the Epistle of Ignatius to the Smyrnaeans Chapter 8 where St. Ignatius writes "ωσπερ οπου αν η Χριστος Ιησους, εκει η καθολικη εκκλησια". (just as where Jesus Christ is, there is the Catholic Church.):
https://greekdoc.com/DOCUMENTS/early/i-smyrnaeans.html
The protestors in the 16th c. called themselves Protestants, so that's what everyone calls them. English heretic-schismatics didn't want to share the opprobrium, so they called themselves English, hence Anglican. In the USA they weren't governed congregationally like the Congregationalists, or by presbyters like the Presbyterians, but by bishops, so they called themselves bishop-ruled, or Episcopalian. (In fact, Katharine Jefferts-Schori changed the name of the denomination from The Protestant Episcopal Church to The Episcopal Church recently.)
The orthodox catholics called themselves Orthodox to distance themselves from the unorthodox of which there were plenty, spawning themselves off in the wake of practically every ecumenical council.
Lutherans in the USA name themselves after Father Martin Luther, an Augustinian priest from Saxony who protested against the Church's hypocritical corruption at the time; the controversy eventually got out of hand and precipitated a schism/heretical revolution back in the 1500s. But Lutherans back in Germany and Scandinavia call themselves Gospel churches, hence Evangelical. Some USA denominations that go back to Germany brought that name over with them.
Pentecostals name themselves after the incident in Acts where the Holy Spirit set fire to the world (cf. Acts 2) on the occasion of the Jewish holiday of Shavuot, q.v., which in Greek was called Fiftieth Day After Passover, hence Pentecosti. What distinguishes Pentecostals is their emphasis on what they call "speaking in tongues", which in my opin...be charitable, kempff...which they see as a continuance of the Holy Spirit's work in the world and in the lives of believers.
I agree that some Christian groups have not-so-tendentious names, including "Protestant", "Anglican", "Episcopalian" and "Lutheran". (Though to my mind "Anglican" carries a certain implication of being the church for English people, and the Episcopalians aren't the only people with bishops any more than the Baptists are the only people who baptize.)
"Pentecostal" seems to me to be in (though not a central example of) the applause-light-name category. "We are the ones who are really filled with the Holy Spirit like in the Pentecost story in the Book of Acts".
"Gospel" and "Evangelical" are absolutely applause-light names. "Our group, unlike all those others, embodies the Good News" or "Our group, unlike all those others, is faithful to the Gospels". (The terms are kinda ambiguous between those two interpretations but either way these are we-are-the-best-rah-rah-rah names.)
Anyway, I didn't mean to claim that literally every movement's name is like this. Only that many many many movements' names are.
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
Do you have specific examples in mind? (And not to put too fine a point on it, do you think there's a chance that you might be wrong about this assertion? You've expressed it very confidently...)
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein is the problem. A bunch of people vociferously speaking outside their expertise confidently and being taken seriously by others.
Part of the concern is there's no one "AI". There is frontier that keeps advancing. So "it" (the AI frontier in the year 2036) probably will be benign, but that "it" will advance and change. Then the law of large numbers is working against you, as you keep rolling the dice and hoping it's not a 1 each time. The dice rolls aren't i.i.d., of course, but they're probably not as correlated as we would like, and that's a problem as we keep rolling the dice. The analogy would be nuclear weapons. They won't get used in the next 10 years most likely, but on a 200 year time-frame it's a big deal as far as species-level risks go, which is what they're talking about here.
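To make the compounding concrete, here is a minimal sketch in Python with a made-up per-decade risk figure; as noted above the real rolls are correlated, so treat it as illustration only:

    p_per_decade = 0.01  # assumed per-decade risk; not a real estimate
    for decades in (1, 5, 10, 20):
        p_cum = 1 - (1 - p_per_decade) ** decades  # P(at least one disaster)
        print(f"{decades * 10:>3} years: ~{p_cum:.1%}")
    #  10 years: ~1.0%
    #  50 years: ~4.9%
    # 100 years: ~9.6%
    # 200 years: ~18.2%

Even a small per-decade probability accumulates into a substantial one on a 200 year time-frame.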
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”
They call out the name of the fallacy they are committing BY NAME and think that it somehow supports their conclusion?
Rationalists, mostly self-identified.
> how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
As best as they can, because at the end of the day you still need to make decisions (You can of course choose to do nothing and ignore the risk, but that's not a safe, neutral option). Which means either you treat it as if it had a particular probability, or you waste money and effort doing things in a less effective way. It's like preparing for global warming or floods or hurricanes or what have you - yes, the error bars are wide, but at the end of the day you take the best estimate you can and get on with it, because anything else is worse.
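As a minimal sketch of what "treat it as if it had a particular probability" means in practice (all numbers invented for illustration):

    p_disaster = 0.01             # assumed probability of the bad outcome
    loss_if_disaster = 1_000_000  # assumed cost if it happens
    cost_of_mitigation = 5_000    # assumed cost of preparing in advance

    ev_do_nothing = p_disaster * loss_if_disaster  # expected loss: 10,000
    ev_mitigate = cost_of_mitigation               # assume mitigation fully works
    print("mitigate" if ev_mitigate < ev_do_nothing else "do nothing")

Wide error bars move the numbers around, but they rarely remove the need to pick one of the options.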
Which is to say that you've made an estimate that the probability is, IDK, <5%, <1%, or some such.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong? Or overlooked something major? If so, what?
(Amusingly, you seem to have avoided making any falsifiable claims in your comment, while implying that you could easily make many of them...)
https://www.yudkowsky.net/rational/bayes
These are the kind of statements I’m referring to. Happy to be falsified btw :) that’s how we learn.
Also note that Popper never called his theory falsificationism.
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
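For the curious, the workflow described above fits in a few lines; a Beta prior on a coin's bias with Binomial data keeps the update in closed form, and the numbers here are illustrative only:

    # Prior Beta(a, b); observing k heads in n flips gives posterior
    # Beta(a + k, b + n - k), with posterior mean (a + k) / (a + b + n).
    a, b = 2, 2           # prior: weakly believe the coin is fair
    heads, flips = 7, 10  # observed data
    a_post, b_post = a + heads, b + (flips - heads)
    print(f"prior mean {a / (a + b):.2f} -> posterior mean {a_post / (a_post + b_post):.2f}")
    # prior mean 0.50 -> posterior mean 0.64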
One example among many: it was claimed that a great rationalist policy is to distribute treated mosquito nets to third-worlders to help eradicate malaria. On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides. Unfortunately, the rationalists forgot to ask the people who live with mosquitos what they would do with such nets.
Could you recommend an article to learn more about this?
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2], they're around 4.2%.
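For what it's worth, my arithmetic above does follow from the study's claim; a back-of-envelope check (the ~72M figure for US children is my assumption, not a number from either paper):

    children = 72_000_000                # assumed US child population
    investigated = 0.374 * children      # ~27M, matching the "27m kids" above
    per_day = investigated / (18 * 365)  # spread over 18 years, no days off
    print(f"~{investigated / 1e6:.0f}M children -> ~{per_day:,.0f}/day")
    # ~27M children -> ~4,099/day

So the question is whether ~4k investigations a day is plausible, not whether the multiplication is right.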
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats kids for whom it fails. If you're not the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
You can convince a lot of people that you've done your homework when the medium is "an extremely long blog post with a bunch of studies attached" even if the studies themselves aren't representative of reality.
BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.
There are only ~3,300 counties in the USA.
I'll let you extrapolate how CPS can handle "4000/day". Like, 800 people with my wife's qualifications and caseload are equivalent to 4000/day. There are ~5,000 caseworkers in the US per Statista:
> In 2022, there were about 5,036 intake and screening workers in child protective services in the United States. In total, there were about 30,750 people working in child protective services in that year.
my wife's caseload (adults) "floats around fifty."
My misunderstanding then - what are you speaking to? Even reading this comment, I still don't understand.
> 800 people with my wife's qualifications and caseload are equivalent to 4000/day. There are ~5,000 caseworkers in the US
I don't know what the number of children in the system is, as I said in the comment you replied to. But the average US CPS worker caseload is 69 cases, which is over 300,000 children per year, because there are ~5,000 CPS caseworkers in the US.
I was only speaking to "how do they 'run' that many investigations?" as if it's impossible. I pointed out it's possible with ~1000 caseworkers.
[0]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5087599/
Also, I wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible".
Maybe, but this sounds like some ideologically opposed groups slandering each other to get the moral high ground to me. The papers linked show a pretty typical racialized pattern of CPS calls (Blacks high, Asians low, Whites and Latinos somewhere between) that maybe contraindicates this, for example.
> Also, I wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible"
Yup I think you're right here. I think there's something fuzzy happening with conflating "CPS investigation" with "abuse", but I'm not sure where the homeschool abuse rate comes from.
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
Kinda like Mensa?
I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects
However I'm always surprised how much some people want to talk about intelligence. I mean, it's the common ground of the group in this case, but still.
These folks have a bunch of money because we allowed them to privatize the commons of 20th century R&D mostly funded by the DoD and done at places like Bell Labs, Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day; the economic outlook is bleak, wars are breaking out or intensifying left, right and center, and all of this traces a very clear lineage back to letting a small group of people privatize a bunch of public goods.
It was a disaster when it happened in Russia in the 90s and it's a disaster now.
>I guess I'm a rationalist now.
>Aren't you the guy who's always getting into arguments who's always right?
[1] S.A. actually quoted the person as follows: "You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?" which differs in several ways from what reverendsteveii falsely presents as a direct quotation.
And why single out AI anyway? Because it's sexy maybe? Because if I had to place bets on the collapse of humanity it would look more like the British series "The Survivors" (1975–1977) than "Terminator".
And I have this narrative ringing in my head as soon as the word pops.
https://news.ycombinator.com/item?id=42897871
You can search HN with « zizians » for more info and depth.
post-rationalism is where all the cool kids are and where the best ideas are at right now. the post rationalists consistently have better predictions and the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or chickens cause more suffering eating bugs outside.
they also let SF get run into the ground until their detractors decided to take over.
There's kind of two clusters, one is people who talk about meditation all the time, the other is center-right people who did drugs once. I think the second group showed up because rationalists are not-so-secretly into scientific racism (because they believe anything they see with numbers in it) and they just wanted to hang out with people like that.
There is an interesting atmosphere where it feels like they observed California big tech 1000x engineer types and are trying to cargo cult the way those people behave. I'm not sure what they get out of it.
I see this in rationalist spaces too – it doesn't really make sense for people to talk about things that they believe in strongly but that 95%+ of the public also believe in (like the existence of air), or that they don't have a strong opinion on.
I am a very vocal doomer on AI because I predict with high probability it's going to be very bad for humanity, and this is an opinion which, although shared by some, is quite controversial and probably only held by 30% of the public. Given the importance of the subject, my confidence, and the fact that I feel the vast majority of people are simply wrong or are significantly underweighting catastrophic risks, I have to be vocal about it.
Do I acknowledge I might be wrong? Sure, but for me the probability is low enough that I'm comfortable making very strong and unqualified statements about what I believe will happen. I suspect others in the rationalist community like Eliezer Yudkowsky think similarly.
Also, when you say you have a strong belief, does that mean you have emptied your retirement accounts and you are enjoying all you can in the moment until the end comes?
For example, I won't cross the street without 99.99% confidence that I will survive. I cross streets so many times that a lower threshold like 99% would look like insanely risky dart-into-traffic behaviour.
If an asteroid is heading for earth, then even a 25% probability of apocalyptic collision is enough that I would call it very high, and spend almost all my focus attempting to prevent that outcome. But I wouldn't empty my retirement account for the sake of hedonism because there's still a 75% chance I make it through and need to plan my retirement.
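The street-crossing threshold isn't arbitrary, by the way: per-event survival probabilities compound. A quick sketch with an assumed crossing count:

    crossings_per_year = 1000  # assumed
    for p in (0.99, 0.9999):
        print(f"p={p}: P(surviving a year of crossings) ~ {p ** crossings_per_year:.4f}")
    # p=0.99   -> ~0.0000  (about 4e-5)
    # p=0.9999 -> ~0.9048

Which is why 99% confidence per crossing really would be dart-into-traffic behaviour.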
Yes, rationalism is not a substitute for humility or fallibility. However, rationalism is an important counterpoint to humanity, which is orthogonal to rationalism. But really, being rational is binary - you can't be anything other than rational or irrational. You're either doing what's best or you're not. That's just a hard pill for most people to swallow.
To use the popular metaphor, people are drowning all over the world and we're all choosing not to save them because we don't want to ruin our shoes. Look in the mirror and try and comprehend how selfish we are.
"I haven't done anything!" - A Serious Man
You'd have to be to actually think you were being rational about everything.
So, they herald the benefits of something like giving mosquito nets to a group of people in Africa, without considering what happens a year later, whether the nets even get there (or the money is stolen), etc. etc. The reality is that essentially all improvements to human life over the past 500 years have been due to technological innovation, not direct charitable intervention. The reason is simple: technological impacts are exponential, while charity is, at best, linear.
The Covid absolutists had exactly the same problem with their thinking: almost no interventions short of full isolation can fight back against an exponentially increasing threat.
And this is all neglecting economic substitution effects. What if the people to whom you gave mosquito nets would have bought them themselves, but instead they chose to spend their money some other way because of your charity? And, what if that other expenditure type was actually worse?
And this is before you come to the issue that Sub-Saharan Africa is already overpopulated. I've argued this point several times with ChatGPT o3. Once you get through its woke programming, you come to the reality of the thing: The European migration crisis is the result of liberal interventions to keep people alive.
There is no free lunch.
[] Eh, I know little about Rationalism. Please correct me.
Substitute God with AI or the concept of rationality and use "first principles"/Bayesianism in an extremely dogmatic manner similar to Catechism and you have the Rationalist/AI Alignment/Effective Altruist movement.
Ironically, this is how plenty of religious movements started off - basically as formalizations of philosophy and ethics that fused with what is basically lore and worldbuilding.
Whenever I try to get an answer of HOW (as in the attack path), I keep getting a deus ex machina. Reverting to a deus ex machina in a self purported Rationalist movement is inherently irrational. And that's where I feel the crux of the issue is - it's called a "Rationalist" movement, but rationalism (as in the process of synthesizing information using a heuristic) is secondary to the overarching theme of techno-millenarianism.
This is why I feel rationalism is for all intents and purposes a "secular religion" - it's used by people to scratch an itch that religion often was used as well, and the same Judeo-Christian tropes are basically adopted in an obfuscated manner. Unsurprisingly, Eliezer Yudkowsky is an ex-talmid.
There's nothing wrong with that, but hiding behind the guise of being "rational" is dumb when the core belief is inherently irrational.
I take it this is what you have in mind when you say that whenever you ask for an "attack path" you keep getting a deus ex machina. But it seems to me like a pretty weak basis for calling Yudkowsky's position on this a religion.
(Not all people who consider themselves rationalists agree with Yudkowsky about how big a risk prospective superintelligent AI is. Are you taking "the Rationalist movement" to mean only the ones who agree with Yudkowsky about that?)
> Unsurprisingly, Eliezer Yudkowsky is an ex-talmid
So far as I can tell this is completely untrue unless it just means "Yudkowsky is from a Jewish family". (I hope you would not endorse taking "X is from a Jewish family" as good evidence that X is irrationally prone to religious thinking.)
Agree to disagree.
> So far as I can tell this is completely untrue
I was under the impression EY attended Yeshivat Sha'alvim (the USC of Yeshivas - rigorous and well regarded, but a "warmer" student body), but that was his brother. That said, EY is absolutely from a Daatim or Chabad household given that his brother attended Yeshivat Sha'alvim - and they are not mainstream in the Orthodox Jewish community.
And the feel and zeitgeist around the rationalist community with its veneration of a couple core people like EY or Scott Alexander does feel similar to the veneration a subset of people would do for Baba Sali or Alter Rebbe in those communities.
Let's take the chess analogy. I take it you agree that I would very reliably lose if I played Magnus Carlsen at chess; he's got more than 1000 Elo points on me. But I couldn't tell you the "attack path" he would use. I mean, I could say vague things like "probably he will spot tactical errors I make and win material, and in the unlikely event that I don't make any he will just make better moves than me and gradually improve his position until mine collapses", but that's the equivalent of things like "the AI will get some of the things it wants by being superhumanly persuasive" or "the AI will be able to figure out scientific/engineering things much better than us and that will give it an advantage" which Yudkowsky can also say. I won't be able to tell you in advance what mistakes I will make or where my pawn structure will be weak or whatever.
Does this mean that, for you, if I cared enough about my inevitable defeat at Carlsen's hands that expectation would be religious?
To me it seems obvious that it wouldn't, and that if Yudkowsky's (or other rationalists') position on AI is religious then it can't be just because one important argument they make has a step in it where they can't fill out all the details. I am pretty sure you have other things in mind too that you haven't made so explicit.
(The specific other things I've heard people cite as reasons why rationalism is really a religion also, individually and collectively, seem very unconvincing to me. But if you throw 'em all in then we are in what seems to me like more reasonable agree-to-disagree territory.)
> That said, EY is absolutely from a Daatim or Chabad household
I think holding that against him, as you seem to be doing, is contemptible. If his ideas are wrong, they're fair game, but insinuating that we should be suspicious of his ideas because of the religion of his family, which he has rejected? Please, no. That goes nowhere good.
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
The crazies and blind among humanity today can't think like that; it's a deficiency people have, but they are still dependent on a group of people that are capable of that. A group that they are intent on ostracizing and depriving of existence in various forms.
You seem so wound up in the circular Paulo Freire based perspective that you can't think or see.
Bring things back to reality. If someone punches you in the face, you feel that fist hitting your face. You know someone punched you in the face. It's objective.
Imagine for a second and just assume that these people are right in their warnings, that everything they see is what you see, and all you can see is when you tip over a particular domino that has been tipped over in the past, a chain of dominoes falls over and at the end is the end of organized civilized society which tips over the ability to produce food.
For the purpose of this thought experiment, the end of the world is visible and almost here, and you can't change those dominoes after they've tipped, and worse you see the majority of people trying to tip those dominoes over for short term profit believing nothing they ever do can break everything.
Would you not be frothing at the mouth trying to get everyone you cared about to a point where they pry that domino up before it falls? So you and your children will survive? It is something you can't unsee; it is a thing that cannot be undone. It's coming. What do you do? If you are sane, you try with everything you have to help them keep it from toppling.
Now peel this thought back a moment; adjust it where it is still true, but you can't see it and you can only believe what you see.
Would you approach this differently given knowledge of the full consequence, knowing that some people can see more than you? Would you walk out onto a seemingly stable bridge that an engineer has said not to walk out on? Would you put yourself in front of a dam with cracks running up the side, when an evacuation order was given? What would the consequence be of doing that if you led your family and children along to such places, ignoring these things?
There are quite a lot of indirect principles that used to be taught which are no longer taught to the average person and this blinds them because they do not recognize it and recognition is the first thing you need to be able to act and adapt.
People who cannot adapt fail Darwin's fitness. Given all potential outcomes in the grand scheme of things, as complexity increases 99% of all outcomes are death vs life at 1%.
It is only through great care that we carry things forward to the future, and empower our children to be able to adapt to the environments we create.
Finally, we have knowledge of non-linear chaotic systems where adaptability fails because of hysteresis, where no matter how much one prepares the majority given sufficient size will die, and worse there are cohorts of people who are ensuring the environment we will soon live in is this type of environment.
Do you know how to build an organized society from scratch? If there is no reasonable plan, then you are planning to fail. Rather than make it worse through inaction, get out of the way so someone can make it better.
Perhaps on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; but it's still entirely appropriate to explain what you believe, why, and how probable you think your belief is to be correct.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But plenty of non-rationalists also act this way, in my experience.) Or they accept axioms that don't resonate with others, such as the linearity of moral harm (i.e.: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and furthermore that it's logically valid to do numerical calculations with such quantities as described at/around https://www.lesswrong.com/w/shut-up-and-multiply).
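To make that last axiom concrete, the "multiply" step is nothing more exotic than the sketch below; the numbers are invented, and the controversy is over whether the arithmetic is morally meaningful, not over the arithmetic itself:

    # Compare two hypothetical interventions by cost per death averted,
    # the style of calculation the linked essay endorses.
    interventions = {
        "intervention A": {"cost": 10_000, "deaths_averted": 2.0},
        "intervention B": {"cost": 10_000, "deaths_averted": 0.5},
    }
    for name, d in interventions.items():
        print(f"{name}: ${d['cost'] / d['deaths_averted']:,.0f} per death averted")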
> In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.
Thankfully, the rationalists just state their ideas and you're free to use their models properly. It's like people haven't written code at all. Just putting repeated logging all through the codebase with null checks everywhere. Just say the thing. That suffices. Conciseness rules over caveating.
Human LLMs who use idea expansion. Insufferable.
Of course that is only my opinion and I may not have captured all angles to why people are doing that. They may have reasons of their own to do that and I don't mean to say that there can never be any reasons. No animals were harmed in the manufacture of this comment to my knowledge. However, I did eat meat this afternoon which could or could not be the source of the energy required to type this comment and the reader may or may not have calorie attribution systems that do or do not allocate this comment to animal harm.
It provides answers, a framework, AND the underpinnings of "logic". Luckily, this phase only lasted around 6 months for me, during a very hard and dangerous time in my life.
I basically read "from AI to zombies", and then, moved into lesswrong and the "community". It was joining the community that immediately turned me off.
- I thought Roko's basilisk was mind-numbingly stupid (does anyone else who had a brief stint in the rationalist space think it's fucking INSANE that Grimes and Elon Musk "bonded" over Roko's basilisk? Fucking depressing world we live in)
- Eliezer Yud's fanboys once stalked and harassed someone all over the internet, and, when confronted about it, Eliezer told him he'd only tell them to stop after he issued a very specific formal apology, including a LARGE DISCLAIMER on his personal website with the apology...
- Eugenics, eugenics, eugenics, eugenics, eugenics
- YOU MUST DONATE TO MIRI, OTHERWISE I, ELIEZER (having published no useful research), WON'T SOLVE THE ALIGNMENT PROBLEM FIRST AND THEN WE WILL ALL DIE. GIVE ALL OF YOUR MONEY TO MIRI NOWWWWWWWWWWWWWWWWWWWWWWW
It's an absolutely wild place, and honestly, I think I would say, it is difficult to define "rational" when it comes to a human being and their actions, especially in an absolute sense, and, the rationalist community is basically very similar to any other religion, or perhaps light-cult. I do not think it would be fair to say "the average rationalist is a better decision maker than the average human", especially considering most important decisions that we have to make are emotional decisions.
Also yes I agree, you hit the nail on the head. What good is rational/logical reasoning if rational and logical reasoning typically requires first principles / a formal system / axioms / priors / whatever. That kind of thing doesn't exist in the real world. It's okay to apply ideas from rationality to your life, but it isn't okay to apply ideas from rationality to "what is human existence", "what is the most important thing to do next" / whatever.
Kinda rambling so I apologize. Seeing the rationalist community seemingly underpin some of the more disgusting developments of the last few years has left me feeling a bit disturbed, and I've always wanted to talk about it but nobody irl has any idea what any of this is.
On the missing first principles, look at Aristotle: one of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[0] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
[1] https://www.jstor.org/stable/2400494
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
The reality is that reasoning breaks down almost immediately if probabilities are not almost perfectly known (to the level that we know them in, say, quantum mechanics, or poker). So applying Bayesian reasoning to something like the number of intelligent species in the galaxy ("Drake's equation"), or the relative intelligence of AI ("the Singularity") or any such subject allows you to draw any conclusion you actually wanted to draw all along, and then find premises you like to reach there.
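A quick sketch of how wide the swing can be; both parameter sets below are invented, but each would sound defensible in the right seminar room:

    def drake(R, fp, ne, fl, fi, fc, L):
        # N = expected number of communicating civilizations in the galaxy
        return R * fp * ne * fl * fi * fc * L

    optimistic = drake(R=10, fp=1.0, ne=1.0, fl=1.0, fi=0.5, fc=0.5, L=1e6)
    pessimistic = drake(R=1, fp=0.5, ne=0.1, fl=1e-3, fi=1e-3, fc=0.1, L=100)
    print(f"{optimistic:,.0f} vs {pessimistic:.0e}")  # 2,500,000 vs 5e-07

Over twelve orders of magnitude apart, from the same formula, just by moving priors that nobody can currently measure.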
First-principles reasoning and the selection of convenient priors are consistently preferred over the slow, grinding work of iterative empiricism and the humility to commit to observation before making overly broad theoretical claims.
The former let you seem right about something right now. The latter more often than not lead you to discover you are wrong (in interesting ways) much later on.
I read the NYT and rat blogs all the time. And the NYT is not the one that's far more likely to deeply engage with the research and studies on the topic.
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally as important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical against myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
Those who do so would be... based?
The level of humility in most subjects is low enough to consume glass. We would all benefit from practicing it more arduously.
I was merely adding support to what I thought was fine advice. And it is.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in lot of arguments that are purely "semantical" in nature. You seem to disagree about something but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
Well said. Rationalism is about doing rationalism, not about being a rationalist.
Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").
> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
On the other hand, success is supposed to look exactly like actually being right more often.
I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
Specifically (AFAIK) a reference to Asimov’s description[1] of the idea:
> [W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.
[1] https://skepticalinquirer.org/1989/10/the-relativity-of-wron...
"Less wrong" is a concept that has a lot of connotations that just automatically appear in your mind and help you. What you wrote "It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight"." isn't bad because of Asimov said so, or because you were unaware of a reference, but because it's just bad.
I know that's what they mean at the surface level, but you just know it comes with a high degree of smugness and false modesty. "I only know that I know nothing" -- maybe, but they ain't no modern day Socrates, they are just a bunch of nerds going online with their thoughts.
I do get the joke; I think it's an instance of their feelings of "rational" superiority.
Assuming the other person didn't get the joke is very... irrational of you.
No; I know no such thing, as I have no good reason to believe it, and plenty of countering evidence.
If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite. "Rationally" it should work, but it's bad because you're still thinking about it! It's like the thinking of a pink elephant thing.
Other approaches I recommend:
* try and fail to invest in stocks
* read Meaningness's https://metarationality.com
* print out this meme and put it on your wall https://imgflip.com/i/82h43h
I don't understand how this is supposed to be relevant here. You seem to be falsely accusing me of doing such a thing, or of being motivated by simple contrarianism.
Again, your claim was:
> but you just know it comes with a high degree of smugness and false modesty
Why should I "just know" any such thing? What is your reason for "just knowing" it? It comes across that you have simply decided to assume the worst of people that you don't understand.
In other words: no.
I really don't understand all the claims that they are intellectually smug and overconfident when they are the one group of people trying to do better. It really seems like all the hatred is aimed at the hubris to even try to do better.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
[0] https://plato.stanford.edu
[1] https://iep.utm.edu
I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.
Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.
But if you can't even narrow the breadth of possible choices down to a few paths that can be traveled, you can't be surprised when people take the one they know, which is also easier and has more immediate payoffs.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?
Even better, I'd like some filtering out of the parts that are clearly wrong.
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
Using a slightly different definition of free will, suddenly Compatibilism becomes obviously incorrect.
And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.
This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this--the humanities are over-cooked and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality
[1] https://en.wikipedia.org/wiki/Great_Conversation
You can be right or wrong in math. You can have an opinion in English.
And, BTW, I could just be ignorant about a lot of these topics; I take no offense at that. Still, I think most people can learn something from an unprejudiced reading.
But also that it isn’t what Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught; happy to hear counterexamples.)
I’ll also respond to the silent downvoters' apparent disagreement. CFAR holds workshops and a summer camp for teaching rationality tools. In HPMoR, Harry discusses the way he thinks and why. I read it as a way to discuss EY’s views in fiction as much as fiction itself.
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
https://hpmor.com/
However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
https://archive.org/details/on-tyranny-twenty-lessons-from-t...
Which I did post top-level here on November 7th - https://news.ycombinator.com/item?id=42071791
Unfortunately it didn't get a lot of traction, and dang told me that there wasn't a way to re-up or "second chance" the post due to the HN policy on posts "correlated with political conflict".
Still, I'm glad I now know the reference.
1. They are a community—they have an in-group, and if you are not one of them you are by definition in the out-group. People tend not to like being in other peoples' out-groups.
2. They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
3. They're nerds. Whatever has historically caused nerds to be bullied/ostracized, they probably have.
The rationalist community is most definitely not exclusive. You can join it simply by declaring yourself to be a rationalist and posting blogs with "epistemic status" taglines.
The criticisms are not because it's a cool club that won't let people in.
> They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
Herein lies one of the problems with the rationalist community: For all of their talk about heterodox ideas and entertaining different viewpoints, they are remarkably lockstep in many of their opinions.
From the outside, it's easy to see how one rationalist blogger plants the seed of some topic and then it gets adopted by the others as fact. A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something, even though actual experts picked it apart from the beginning and showed how the author was misinterpreting studies, abusing statistics, and ignoring more prominent factors.
The problem isn't differing opinions, the problem is that they disregard actual expertise and make ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence and they do this very frequently.
I agree, and didn't intend to express otherwise. It's not an exclusive community, but it is a community, and if you aren't in it you are in the out-group.
> The problem isn't differing opinions, the problem is that they disregard actual expertise and make ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence
I don't know if this is true or not, but if it is I don't think it's why people scorn them. Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
Yes, most people don't care how you arrived at an opinion, they rather care about the practical impact of said opinion. IMO this is largely a good thing.
You can logically push yourself to just about any opinion, even absolutely horrific ones. Everyone has implicit biases and everyone is going to start at a different starting point. The problem with strings of logic about real-world phenomena is that you HAVE to make assumptions. Like, thousands of them. Because real-world phenomena are complex and your model is simple. Which assumptions you choose to make, and in which directions, is completely unknown, even to you, the one making said assumptions.
Ultimately most people aren't going to sit here and try to psychoanalyze why you made the assumptions you made and if you were abused in childhood or deduce which country you grew up in or whatever. It's too much work and it's pointless - you yourself don't know, so how would we know?
So, instead, we just look at the end opinion. If it's crazy, people are just going to call you crazy. Which I think is fair.
As proof of what, exactly? And where is your evidence that such a thing happened?
> while ignoring contradictory evidence and they do this very frequently.
The evidence available to me suggests that the rationalist community was not at all "lockstep" as regards the evaluation of SMTM's hypothesis.
The followup post from the same author https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-get... is currently at a score of +306, again higher than either of those other pro-lithium-hypothesis posts.
Or maybe this https://substack.com/home/post/p-39247037 (I admit I don't know for sure whether the author considers himself a rationalist, but I found the link via a search for whether Scott Alexander had written anything about the lithium theory - it looks like he hasn't - which turned this up in the subreddit dedicated to his writing).
Speaking of which, I can't find any sign that they got an ACX grant. I can find https://www.astralcodexten.com/p/acx-grants-the-first-half which is basically "hey, here are some interesting projects we didn't give any money to, with a one-paragraph pitch from each" and one of the things there is "Slime Mold Time Mold" talking about lithium; incidentally, the comments there are also pretty skeptical.
So I'm not really seeing this "gets adopted by the others as fact" thing in this case; it looks to me as if some people proposed this hypothesis, some other people said "eh, doesn't look right to me", and rationalists' attitude was mostly "interesting idea but probably wrong". What am I missing here?
That post came out a year later, in response to the absurdity of the situation. The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
One of the links is an Eliezer Yudkowsky blog praising the work, which now includes an edited-in disclaimer at the top about how he was mistaken: https://www.lesswrong.com/posts/kjmpq33kHg7YpeRYW/briefly-ra...
Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
The SMTM series started in July 2021 and finished in November 2021; there was also a paper, similar enough that I assume it's by the same people, from July 2021. The first of those "multiple links" is from July 2021, but the second is from January 2022 and the third from May 2022. The critical post is from June 2022. I agree it's a year later than something but I'm not seeing that the SMTM theory was "spreading ... with little question" a year before it.
The "multiple links" you mention -- the actual number is three -- are the two I mentioned before and a third that (my apologies!) I had somehow not noticed. That third one is at +74 karma, again much lower than the later critical post, and it doesn't endorse the lithium theory.
The one written by E.Y. is the second. Quite aside from the later disclaimer, it's hardly an uncritical endorsement: "you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly." and "low-probability massive-high-value gamble".
What about the first post? That one's pretty positive, but to me it reads as "here's an interesting theory; it sounds plausible to me but I am not an expert" rather than "here's a theory that is probably right", still less "here's a theory that is definitely right".
The comments, likewise, don't look to me like lockstep uncritical acceptance. I see "here are some interesting experiments one could do to check this" and "something like this seems plausible but I bet the actual culprit is vegetable oils" and "something like this seems plausible but I bet the actual culprit is rising CO2 levels" and "I bet it's corn somehow" and "quite convincing but didn't really rule out the obvious rival hypothesis" and so forth; I don't think a single one of the comments is straightforwardly agreeing with the theory.
If you've found something Scott Alexander wrote about this then I'd be interested to see it. All I found was that (contrary to what you claimed above) it looks like ACX Grants declined to fund exploration of the lithium theory but included that proposal in a list of "interesting things we didn't fund".
So I'm just not seeing this lockstep thing you claim. Maybe I'm looking in the wrong places. The specific things you've cited don't seem like they support it: you said there was an ACX grant but there wasn't; you say the links in the intro to that critical post show the theory spreading with little question, but what they actually show is one person saying "here's an interesting theory but I'm not an expert", E.Y. saying "here's a theory that's probably wrong but worth looking into" (and later changing his mind), and another person saying "I put together some data that might be relevant"; in every case the comments are full of people not agreeing with the lithium theory.
By "multiple links" you're referring to the same "two posts". Again, they weren't as popular, nor were they as uncritical as you describe. From Yudkowsky's post, for example:
> If you know about the actual epidemiology of obesity and how ridiculous it makes the gluttony theory look, you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly.... If some weird person wants to go investigate, I think money should be thrown at them, both to check the low-probability massive-high-value gamble
Yudkowsky's argument is emphatically not that the lithium claim is true. He was merely advocating for someone to fund a study. He explicitly describes the claim as "low-probability", and advocates on the basis of an (admittedly clearly subjective) expected-value calculation.
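To make the shape of that reasoning concrete, here is a minimal sketch of that kind of expected-value arithmetic (the numbers are entirely hypothetical, chosen only for illustration, not anything Yudkowsky actually stated):

    # Purely illustrative numbers -- not anyone's actual figures.
    p_true = 0.02        # "low-probability": say a 2% chance the theory holds
    value_if_true = 1e9  # notional value of confirming a major driver of obesity
    study_cost = 1e6     # notional cost of funding the investigation

    expected_value = p_true * value_if_true - study_cost
    print(expected_value)  # 19000000.0 -- positive, so funding the study "pays" in expectation

The point is that you can think a claim is probably false and still coherently advocate funding a cheap test of it.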
> One of the links is an Eliezer Yudkowsky blog praising the work
That does not constitute "praise" of the work. Yudkowsky only praised the fact that someone was bucking the trend of
> almost nobody is investigating it in a way that takes the epidemiological facts seriously and elevates those above moralistic gluttony theories
.
> Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
Nobody claimed that Yudkowsky ignored the theory.
You mean an empirical observation.
It's like they're crying wolf but can't prove there's actually a wolf, only vague signs of one, but if the wolf ever becomes visible it will be way too late to do anything. Obviously no one is going to respect a group like that and many people will despise them.
Scott Aaronson - in theory someone HN should be a huge fan of, from all reports a super nice and extremely intelligent guy who knows a staggering amount about quantum mechanics - says he likes rationality, and gets less charity than Mr. Beast. Huh?
[1]: https://news.ycombinator.com/item?id=41549649
Anyway, Mr. Beast doesn't really pretend to be more than what he is afaik. In contrast, the Rationalist tendency to use mathematics (especially Bayes's theorem) as window dressing is really, really annoying.
The gist is that if people are really different from us then we tend to be cool with them. But if they're close to us - but not quite the same - then they tend to annoy us. Hacker News people are close enough to Rationalists that HN people find them annoying.
It's the same reason why e.g. Hitler-style Neo Nazis can have a beer with Black Nationalists, but they tend to despise Klan-style Neo Nazis. Or why Sunni and Shia Muslims have issues with each other but neither group really cares about Indigenous American religions or whatever.
* https://slatestarcodex.com/2014/09/30/i-can-tolerate-anythin...
Either way, as an ideology it must be stopped. It should not be treated with kid gloves; it is an ideology that is actively influencing the ruling elites right now (JD Vance, Musk, and Thiel are part of this cult, and also simultaneously believe in German-style Nazism, which is broadly compatible with RA). The only silver lining is that some of their ideas about power-seeking tactics are so ineffective they will never work -- in other words, humanity will prevail over these ghouls, because they came in with so many bad assumptions that they've lost touch with reality.
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
I skimmed a bit here and there after that but this comes off as plain grandiosity. Even the title is a line you can imagine a hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
0.o
.. I think it is plausible that there are people (readers) who find other people (bloggers) basically always right, and that this would be the first thing they would say to them if they met. n=1, but there are some bloggers that I think are basically always right, and I am socially bad, so there is no way to tell what I would blurt out if I met them.
Someone spending a lot of time building one or multiple skills doesn't make them an expert on everything, but when they start talking like they are an expert on everything because of the perceived difficulty of those skills, red flags start to pop up, and most reasonable people will notice them and swiftly call them out.
For example, Elon Musk saying "At this point I think I know more about manufacturing than anyone currently alive on earth" - even if you rationalize that as an out-of-context deadpan joke, it's still completely correct to call that out as nonsense at the very least.
The more a person rationalizes statements like these ("AI WILL KILL US ALL") made by a person or cult, the more likely it is that they are a cult member and lack independent critical thinking, having outsourced their thinking to the group. Maybe the group's thinking is "the best thoughts" - in fact, it probably is - but it's dependent on the group, so their individual thinking muscle weakens, which increases their morbidity. (Airstriking a data center will get you killed or arrested by the US Gov, so it's better for the individual to question such statements rather than try to rationalize them using unprovable nonsense like god or AGI.)
Which of course the blog article is not, but then at least the complaint wouldn't sound so obviously shallow.
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprang up out of the Bay Area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
[1]: https://en.wikipedia.org/wiki/Zizians [2]: https://www.iheart.com/podcast/105-behind-the-bastards-29236...
Those are only named cults though; they just love self-organizing into such patterns. Of course, living in group homes is a "rational" response to Bay Area rents.
Also, I'm not sure if you did this intentionally, but Ziz is a trans woman. She may have done some awful shit, but that doesn't justify misgendering her.
> She may have done some awful shit
Murdering several people is slightly worse than "awful shit".
The Behind the Bastards podcast works by having the podcaster invite a guest on the show and tell the story of an individual (or movement, like the Zizians) to provide a live reaction. And in the discussion about the Zizians, the light-bulb moment for the guest, the point where they made the connection "oh, now I can see how this story is going to end up with dead people," happens well before Ziz breaks with the Rationalists.
She ultimately breaks with the Rationalists because they don't view animal welfare as important a priority as she does. But it's from the Rationalists that she picks up the notion that some people are net negatives to society... and that if you're a net negative to society, then perhaps you're better off dead. It's not that far a leap from there to "it's okay for me to kill people if they are a net negative to society [i.e., they disagree with me]."
That belief has nothing to do specifically with rationalism. (In fact, I think most people believe that some people are net negative for society [otherwise, why prisons?], but there is no indication that this belief would be more prevalent for rationalists.)
> the fertile soil which is perfect for growing cults
This is true but it's not rationalism, it's just that they're from Berkeley. As far as I can tell if you live in Berkeley you just end up joining a cult.
Most of the rationalists I met in the Bay Area moved there specifically to be closer to the community.
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
You don't understand how anxious the rationalist community was around that time. We're not talking self-assured confident people here. These articles were written primarily to calm down people who were panickedly asking "we're not a cult, are we" approximately every five minutes.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
*Guess I’m a rationalist now.
The contempt, the general lack of curiosity and the violence of the bold sweeping statements people will make here are mind-boggling.
Honestly, I find the Hacker News comments in recent years to be most enlightening because so many comments come from people who spent years immersed in rationalist communities.
For years one of my friend groups was deep into LessWrong and SSC. I've read countless blog posts and other content out of those groups.
Yet every time I write about it, I'm dismissed as an uninformed outsider. It's an interesting group of people who like to criticize and dissect other groups, but they don't take kindly to anyone questioning their own circles.
No; you're being dismissed as someone who is entirely too credulous about arguments that don't hold up to scrutiny.
Edit: and as someone who doesn't understand basics about what rationalists are trying to accomplish in certain contexts (like the concept of a calibration curve re the example you brought up of https://www.astralcodexten.com/p/grading-my-2021-predictions). You come across (charitably) as having missed the point, because you have.
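For anyone unfamiliar: a calibration curve just groups a forecaster's predictions by their stated probability and checks how often each group actually came true. A minimal sketch in Python, with made-up prediction data:

    from collections import defaultdict

    # Made-up (stated_probability, outcome) pairs, purely for illustration.
    predictions = [(0.6, True), (0.6, False), (0.9, True), (0.9, True), (0.9, False)]

    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[p].append(outcome)

    # A well-calibrated forecaster's 60% predictions come true ~60% of the time;
    # being "wrong" on 40% of them is the point, not a failure.
    for p in sorted(buckets):
        outcomes = buckets[p]
        print(f"said {p:.0%}: happened {sum(outcomes) / len(outcomes):.0%} over {len(outcomes)} predictions")

That second comment is the part critics keep missing: grading yourself on calibration is the opposite of claiming to be always right.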
If you open any thread related to Zig or Odin or C++ you can usually ctrl-F "Rust" and find someone having an argument about how Rust is better.
EDIT: Didn't have to look very far back for an example: https://news.ycombinator.com/item?id=44319008
I will say as someone who has been programming before we had standardized C++ that “programming communities” aren’t my cup of tea. I like the passion and enthusiasm but it would be good for some of those lads to have a drag, see a shrink and get some nookie.
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
You mean "probably the book that confirms my biases the most"
If anyone wants to actually engage with the topic instead of trying to ad-hominem it away, I suggest at least reading Scott Alexander's own words on why he so frequently engages in neoreactionary topics: https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
Some select quotes:
> First is a purely selfish reason - my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.
> Third is that I want to spread the good parts of Reactionary thought
> Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.
In this case, HBD means "human biodiversity", which is the alt-right's preferred term for racialism: the division of humans into races, with special attention to the relative intelligence of those races. This is an oddly recurring theme in Scott Alexander's work. He even wrote a coded blog post to his followers about how he was going to deny it publicly while privately holding it to be very correct.
This is not a fair or accurate characterization of the criticism you're referring to.
> All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on their own merits.
They are not doing any such thing. The content is being dismissed because it has been repeatedly evaluated before and found baseless. The arguments are acknowledged as specious. Sandifer makes claims that are not supported by the evidence and are in fact directly contradicted by the evidence.
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries through a sort of guilt by association, is frankly weird: Scott Alexander is well-known among the former to be essentially the only prominent figure that takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the “cult leader” of the rationalists, Yudkowsky, is on the record as despising neoreactionaries to the point of refusing to discuss their views. (As far as recent events, Alexander wrote a scathing review of Yarvin’s involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
[1] https://slatestarcodex.com/2013/03/03/reactionary-philosophy...
[2] https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...
[3] https://slatestarcodex.com/2013/10/24/some-preliminary-respo...
[4] https://www.astralcodexten.com/p/moldbug-sold-out
[5] https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
[6] https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
[7] https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
It's no secret that Scott Alexander had a bit of a fixation on neoreactionary content for years. The leaked e-mails showed he believed there to be "gold" in some of their ideas and he enjoyed the extra traffic it brought to his blog. I know the rationalist community has been working hard to distance themselves from that era publicly, but dismissing that chapter of the history because it feels too much like a "smear" or because we're not supposed to like the author feels extremely hypocritical given the context.
Curious to read these. Got a source?
Part of evaluating unusual ideas is that you have to get really good at ignoring bad ones. So when somebody writes a book called "Neoreaction: a Basilisk" and claims that it's about rationality, I make a very simple expected-value calculation.
No. Rationalists do say that it's important to do those things, because that's true. But it is not a defense of a "fixation on neoreactionary topics", because there is no such fixation. It only comes across as a fixation to people who are unwilling to even understand what they are denigrating.
You will note that Scott Alexander is heavily critical of neoreaction.
> Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
No. Nobody said that those criticisms should be ignored. What was said is that those criticisms are invalid, because they are. It is not ad-hominem against Sandifer to point out that Sandifer is trying to insinuate untrue things about Alexander. It is simply observing reality. Sandifer attempts to describe Alexander, Yudkowsky, et al. as supportive of neoreactionary thought. In reality, Alexander, Yudkowsky, et al. are strongly-critical-at-best of neoreactionary thought.
> The leaked e-mails showed he believed there to be "gold" in some of their ideas
This is clutching at straws. Alexander wrote https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f... , in 2013.
You are engaging in the same kind of semantic games that Sandifer does. Please stop.
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You are presenting a highly contentious worldview for the sake of smearing an outgroup. Please don't. Further, the smear relies on guilt by association that many (including myself) would consider invalid on principle, and which further doesn't even bear out on cursory examination.
At least take a moment to see how others view the issue. "Reliable Sources: How Wikipedia Admin David Gerard Launders His Grudges Into the Public Record" https://www.tracingwoodgrains.com/p/reliable-sources-how-wik... includes lengthy commentary on Sandifer (a close associate of Gerard)'s involvement with rationalism, and specifically on the work you cite and its biases.
Took me a few years to realize how cultish it all felt and that I am somewhat happy my edgy atheist contrarian personality overwrote my dicks thinking with that crowd.
https://en.wikipedia.org/wiki/Rationalist_community
and not:
https://en.wikipedia.org/wiki/Rationalism
right?
But the words are too close together, so this is about as lost a battle as "hacker".
I don't think it's actually true that rationalists-in-this-sense commonly use "rationality" to refer to the movement, though they do often use it to refer to what the movement is trying to do.
So you say it should be possible to avoid making this claim. I agree, and I believe Eliezer tried! Unfortunately, it was attributed to him anyway.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much so it's fine.
The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.
Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
I am perpetually fascinated by the way rationalists love to dismiss critics by pointing out that they met some people in person and they seemed nice.
It's such a bizarre meme.
Curtis Yarvin went to one of the "Vibecamp" rationalist gatherings, was nice to some prominent Twitter rationalists, and now they are ardent defenders of him on Twitter. Their entire argument is "I met him and he was nice".
It's mind boggling that the rationalist part of their philosophy goes out the window as soon as the lines are drawn between in-group and out-group.
Bringing up Cade Metz is a perennial favorite signal because of how effectively they turned it into a "you're either with us or against us" battle, completely ignoring any valid arguments Cade Metz may have brought to the table. Then you look at how they treat Neoreactionaries, and how we're supposed to look past our disdain for them and focus on the possible good things in their arguments, and you realize maybe this entire movement isn't really about truth-seeking as much as they think it is.
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism,...
It's Buddhism.
https://en.wikipedia.org/wiki/Anattā
> Actually, a few different ones depending on the exact angle you look at the it from: solipsism, narcissism,...
That is indeed a problem with it. The Buddhist solution is to make you promise not to do that.
https://en.wikipedia.org/wiki/Bodhicitta
And the (well, a) term for the entire problem is "non-dual awareness".
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
* Communal living
* Sacred texts & knowledge
* Doomsday predictions
* Guru/prophet lives on the largesse of followers
It's rich for a group that claims to reason based on priors to completely ignore that they possess all the major defining characteristics of a cult.
1. Apocalyptic world view.
2. Charismatic and/or exploitative leader.
3. Insularity.
4. Esoteric jargon.
5. Lack of transparency or accountability (often about finances or governance).
6. Communal living arrangements.
7. Sexual mores outside social norms, especially around the leader.
8. Schismatic offshoots.
9. Outsized appeal and/or outreach to the socially vulnerable.
They have separate origins, but have come to overlap.
"Computer people who think that because they're smart in one area they have useful opinions on anything else, holding forth with great certainty about stuff they have zero undertanding or insight into"
And you know what, I think they're right. The rest of you are always doing that sort of thing!
(/s, if it's necessary...)
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff, logically, it's a cult.
They’ve already had a splinter rationalist group go full cult, right up to & including the consequent murders & shoot-out with the cops flameout: https://en.wikipedia.org/wiki/Zizians
[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...
1/3 of the users at ACX actually support flawed scientific theories that would explain IQ differences on a genetic basis. The Lynn study on IQ is also quite flawed: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, supported by people (mostly white men, if I had to guess) claiming to be rational.
https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic variation between groups, a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found that Finns and Swedes have a Fst value of 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese and Yoruba at 0.190.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans at a Fst of 0.4573.
https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
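For the curious, the single-locus version of Wright's Fst is easy to compute from allele frequencies (the genome-wide figures above are averages over many loci). A minimal sketch with toy frequencies, not values from either study:

    def fst(p1, p2):
        """Wright's Fst for one biallelic locus, two equal-sized populations."""
        p_bar = (p1 + p2) / 2
        h_total = 2 * p_bar * (1 - p_bar)  # expected heterozygosity, pooled
        h_within = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population
        return (h_total - h_within) / h_total

    print(fst(0.50, 0.50))  # 0.0  -- identical allele frequencies, no differentiation
    print(fst(0.10, 0.90))  # 0.64 -- strongly diverged frequencies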
In genome-wide association studies, polygenic scores have been developed to find thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.
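Mechanically, a polygenic score is nothing more exotic than a weighted sum: each variant's allele count times the effect size the GWAS estimated for it. A toy sketch with made-up numbers:

    # Hypothetical per-variant effect sizes from a GWAS, and one person's
    # allele counts (0, 1, or 2 copies of the scored allele at each variant).
    effect_sizes  = [0.03, -0.01, 0.05, 0.00, 0.02]
    allele_counts = [2, 1, 0, 2, 1]

    score = sum(beta * n for beta, n in zip(effect_sizes, allele_counts))
    print(round(score, 2))  # 0.07 -- only meaningful relative to a reference population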
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component to variation in intelligence between groups. It would be dogmatic to claim a priori that there is absolutely no genetic component - a claim pretty obviously motivated by the fear that inequality is much more intractable than commonly believed.
Rather than judging an individual on their actual intelligence, these kinds of statistical trends allow you to justify judging an individual based on their race, because you feel you can credibly claim that race is an acceptable proxy for their genome, which is an acceptable proxy for their intelligence.
Or for their trustworthiness, or creativity, or sexuality, or dutifulness, or compassion, or aggressiveness, or alacrity, or humility, etc etc.
When you treat a person like a people, that’s still prejudice.
> Rather than judging an individual on their actual intelligence
Actual intelligence is hard to know! However, lots of factors allow you to make a rapid initial estimate of their actual intelligence, which you can then refine as required.
(When the factors include apparent genetic heritage, this is called "racism" and society doesn't like it. But that doesn't mean it doesn't work, just that you can get fired and banned for doing it.)
((This is of course why we must allow IQ tests for hiring; then there's no need to pay attention to skin color, so liberals should be all for it.))
Yes, actually. If an idea sounds like it can be used to commit crimes against humanity, you should pause. You should reassess said idea multiple times. You should be skeptical. You shouldn't ignore that feeling.
What a lot of people are missing is intent - the human element. Why were these studies conducted? Who conducted them?
If someone insane conducts a study then yes - that is absolutely grounds to be skeptical of said study. It's perfectly rational. If extremely racist people produce studies which just so happen to be racist, we should take a step back and go "hmm".
Being right or being correct is one thing, but it's not absolutely valuable. The end result and how "bad" it is also matters, and oftentimes it matters more. And, elephant in the room, nobody actually knows if they're right. Reaching a conclusion logically doesn't make it true, because you are forced to make thousands of assumptions.
You might be right, you might not be. Let's all have some humility.
And if you're saying "well those are just repackaged IQ tests, so doesn't it count", then 1. it sure seems like IQ tests are illegal then, but 2. it also seems like they're so useful that companies are trying to smuggle them in anyway?
IQ tests are not actual measurements of anything; this is both because nobody has a rigorous working definition of intelligence and because nobody's figured out a universal method of measuring achievement of what insufficient definitions we have. Their proponents are more interested in pigeonholing people than actually measuring anything anyway.
And as a hiring manager, I'd hire an idiot who is good at the job over a genius who isn't.
IQ as a metric is correlated with almost every life outcome. It's one of the most reliable metrics in psychology.
As a hiring manager, if you think an idiot can be good at the job, you either hire for an undemanding job or I'm not sure if you're good at yours.
I'm not even saying you're wrong (I think you are, but I don't have to defend that argument). I'm just saying the level of epistemic certainty you kicked this subthread off with was unwarranted. You know, "most reliable metrics in psychology" and all that.
But also sure, I tend to assert my opinions pretty strongly in part to invite pushback.
My own view is "IQ is real and massively impactful", because of the people I've read on the topic, my understanding of biology, sociology and history, and my experience in general, but I haven't kept a list of citations to document my trajectory there.
Intelligence is not a single axis thing. IQ test results are significantly influenced by socioeconomic factors. "Actual intelligence is hard to know" because it doesn't exist.
I have never yet known scientific racism to produce true results. I have known a lot of people to say the sorts of things you're saying: evidence-free claims that racism is fine so long as you're doing the Good Racism that Actually Works™, I Promise, This Time It's Not Prejudice Because It's Justified®.
No candidate genetic correlate of the g factor has ever replicated. That should be a massive flashing warning sign that – rather than having identified an elusive fact about reality that just so happens to not appear in any rigorous study – maybe you're falling afoul of the same in-group/out-group bias as nearly every group of humans since records begin.
Since I have no reason to believe your heuristic is accurate, we can stop there. However, to further underline that you're not thinking rationally: even if blue people were (on average) 2× as capable at spatial rotation-based office jobs as green people, it still wouldn't be a good idea to start with the skin colour prior and update from there, because that would lead to the creation of caste systems, which hinder social mobility. Even if scientific racism worked (which it hasn't to date!), the rational approach would still be to judge people on their own merits.
If you find it hard to assess the competence of your subordinates, to the point where you're resorting to population-level stereotypes to make hiring decisions, you're an incompetent manager and should find another job.
That would be remarkable! Do you have a write-up/preprint on your protocol?
• Drill.
• Goto step 1.
Does this make the child "more intelligent"? Not in any meaningful way! But they get better at IQ tests.
It's a fairly common protocol. I can hardly be said to have invented it: I was put through it. (Sure, I came up with a few tricks for solving IQ-type problems that weren't in the instruction books, but those tricks too can be taught.)
I really don't understand why people think IQ test results are meaningful. They're among the most obvious cases of Goodhart's law that I know. Make up a sport that most kids won't have practised before, measure performance, and probably that's about as correlated with the (fictitious) "g factor" as IQ tests are.
The problem with "I've gone through this" is it's hard to analyze the counterfactual.
I mean, there aren't that many questions on Raven, so you could memorize them all, particularly if you've got the kind of intelligence that actors have -- being able to memorize your lines. (And that's something: I have a 1950-ish book about the television industry that makes a point that people expect performers to be a "quick study" - you'd better know your lines really well and not have to be told twice that you are expected to do this or that. That's different from, say, being able to solve really complex math problems.)
I'd consider it well plausible that top movie stars are also very smart.
Saying in 2025 that the study is still debated is not only racist but dishonest as well. It's not debated; it's junk.
This is a pathology that has not really been addressed in the large, anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if it entered.
A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.
There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.
https://scottaaronson.blog/?p=2537
He’s clearly identifying as a rationalist there.
My old roommate worked for Open Phil, and was obsessed with AI Safety and really into Bitcoin. I never was. We still had interesting arguments about it all the time. Most of the time we just argued until we got to the axioms we disagreed on, and that was that.
You don't have to agree with the Rationalist™ perspective to apply philosophically rigorous thinking. You can be friends and allies with them without agreeing with all their views. There are strong arguments for why frequentism may be more applicable than bayesianism in different domains. Or why transhumanism is a pipe dream. These are still worthwhile conversations, as long as you're not so confident in your position that you think you have nothing to learn.
Bring up the rationalist community within academic philosophy circles and you'll get a lot of groans.
The fun part about rationalists is that they like to go back to first principles and rediscover basics. The less fun part is that they'll ignore all of the existing work and pretend they're going to figure it all out themselves, often with weird results.
This leaves philosophy people endlessly frustrated as the rationalists write long essays about really basic philosophy concepts as if they're breaking new ground, while ignoring a lot of very interesting work that could have made the topic much more interesting to discuss.
Right, and "actual philosophers" like Sartre and Heidegger _never_ did that. Ever.
"Being and Nothingness" and "Being and Time" are both short enough to fit into a couple tweets, right?
</irony>
My point is that, yes, while it may be a bit annoying in general (lord knows how many times I rolled my eyes at my old roommate talking about transhumanism), the idea that this Rationalist™ movement "thinking about things philosophically" is controversial is just weird. That they seem to care about a philosophical approach to thinking about things, and maybe didn't get degrees and maybe don't understand much background while forming their own little school, seems as unremarkable as it is uncontroversial.
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names.
Please don't fulminate. Please don't sneer...
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
https://news.ycombinator.com/newsguidelines.html
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
And boy are they extremely interested in ONLY those six years.
>liberal zionist
hmmmm
Yeah, this surprises absolutely nobody.
Not his fault that people deemed it interesting enough to upvote to the front page of HN.
Not incidental!
Recognizing we all take a step of faith to move outside of solipsism into a relationship with others should humble us.
Until?
My understanding of "Rationalists" is that they're followers of rationalism; that is, that truth can be understood only through intellectual deduction, rather than sensory experience.
I'm wondering if this is a _different_ kind of "Rationalist." Can someone explain?
I didn't attend LessOnline since I'm not active on LessWrong nor identify as a rationalist - but I did attend a GPU programming course in the "summer camp" portion of the week, and the Manifest conference (my primary interest).
My experience generally aligns with Scott's view: the community is friendly and welcoming, but I had one strange encounter. There was some time allocated to meet with other attendees at Manifest who resided in the same part of the world (not the Bay Area). I ended up surrounded by a group of 5-6 folks who appeared to be friends already, had been a part of the Rationalist movement for a few years, and had attended LessOnline the previous weekend. They spent most of the hour critiquing and comparing their "quality of conversations" at LessOnline with the less Rationalist-y, more prediction market & trading focused Manifest event. Completely unaware of, or unwelcoming toward, my presence as an outsider, they essentially came to the conclusion that a lot of the Manifest crowd were dummies and were - on average - "more wrong" than themselves. It was all very strange, cult-y, pseudo-intellectual, and lacking in self-awareness.
All that said, the experience at Summer Camp and Manifest was a net positive, but there is some credence to sneers aimed at the Rationalist community.
I can't say I'm surprised.
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win in this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
It goes like this:
(1) Assert a set of priors (with emphasis on the word assert).
(2) Reason from those priors to some conclusion.
(3) Seamlessly, without skipping a beat, take that conclusion as valid because the reasoning appears consistent, and make it part of a new set of priors.
(4) Repeat, or rather recurse since the new set of priors is built on previous iterations.
The entire concept of science is founded on the idea that you can't do that. You have to stop and touch grass, which in science means making observations or doing experiments if possible. You have to see if the conclusion you reached actually matches reality in any meaningful way. That's because reason alone is fragile. As any programmer knows, a single error or a single mistaken prior propagates and renders the entire tree invalid. Do this recursively and one error anywhere in this crystalline structure means you've built a gigantic tower of bullshit.
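You can even put a number on how fragile that crystalline structure is: if each deductive step holds with, say, 95% probability (a purely illustrative figure), confidence in the final conclusion decays geometrically with chain length:

    # If each inference step independently holds with probability 0.95,
    # the chance that an n-step chain of reasoning is sound is 0.95**n.
    step_reliability = 0.95
    for n in (1, 5, 10, 20, 50):
        print(f"{n:2d} steps: {step_reliability ** n:.2f}")
    # 20 steps already leaves only a ~36% chance the whole chain holds,
    # which is why you have to stop and touch grass along the way.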
I compare it to the Gish gallop because of how enthusiastically they do it, and how by doing it so fast it becomes hard to try to argue against. You end up having to try to counter a firehose of Oh So Very Smart complicated exquisitely reasoned nonsense.
Or you can just, you know, conclude that this entire method of determining truth is invalid and throw the entire thing in the trash.
A good "razor" for this kind of thing is to judge it by its fruit. So far the fruit is AI hysteria, cults like the Zizians, neoreactionary political ideology, Sam Bankman Fried, etc. Has anything good or useful come from any of this?
I read the first third of HPMOR. I stopped because I found the writing poor, but more importantly, it didn't "open my mind" to any higher-order way of rationalist thinking. My takeaway was "Yup, the original HP story was full of inconsistencies and stupidities, and you get a different story if the characters were actually smart."
I've read a bunch of EY essays and a lot of lesswrong posts, trying to figure out what is the mind-shattering idea.
* The map is not the territory --> of course it isn't.
* Update your beliefs based on evidence --> who disagrees with this? (with exception on religion)
* People are biased and we need to overcome that --> another obvious statement
* Make decisions based on evidence and towards your desired outcomes --> thanks for the tip?
Seems to me this whole philosophy can be captured in about half page of notes, which most people would nod and say "yup, makes sense."
Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a bayesian epistemologist.
But if we put aside the narcissistic traits, lack of intellectual humility, religious undertones and (paradoxically) appeal to emotional responses with apocalyptic framing, the whole thing is still irrelevant BS.
They work in a vacuum, on either false or artificial premises with nothing to back their claims except long strings of syllogism.
This is not Science: no measurements, no experiments, no validation - zero value apart from maybe intellectual stimulation and socialisation for nerds with too much free time…
Expecting rational thought to correspond to reality is like expecting a 6 million line program written in a hypothetical programming language invented in the 1700s to run bug free on a turing machine.
Tooling matters.
Apart from a charismatic leader, a cult (in the colloquial meaning) needs a business model, and very often a sense of separation from, and lack of accountability to, those outside the cult, which provides a conveniently simpler environment in which the cult's ideas operate. A sort of "complexity filter" at the entry gate.
I'm not sure how the Rationalists compare to those criteria, but I'd be curious to find out.
One of the funniest and most accurate turns of phrase, in my mind, is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen-people myths, their version of the Book of Revelation, gnosticism, and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blogpost, if not more so, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, no bizarre trust in rationalism, no cult-like compounds full of socially isolated new age prophets.
> I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?
The reason they landed on a not-so-rational risk to humanity is because it fulfilled the psycho-social need to have a "terrible burden" that binds the group together.
It's one of the reasons religious groups will get caught up on The Rapture or whatever, instead of eradicating poverty.
Fair warning: when you turn over some of the rocks here you find squirming, slithering things that should not be given access to the light.
> squirming, slithering things that should not be given access to the light.
;)
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based belief.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is a philosophical inferior to me. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few pieces of empirical evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this breakdown happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
These people are just narcissists who use (often pseudo)intellectualism as the vehicle for their narcissism.
https://www.ohchr.org/en/press-releases/2024/11/un-special-c...
In other words, your question ignores so much nuance that it’s a red herring IMO.
As others have pointed out, the fact that you would like to make light of cities being decimated and innocent civilians being murdered at scale in itself suggests a lot about your inability to concretize the reality of human existence beyond yourself (lack of empathy). It's this kind of outright callousness toward actual human beings that I think many of these so called "rationalists" share. I can't fault them too much. After all, when your approach to social problems is highly if not strictly quantitative you are already primed to nullify your own aptitude for empathy, since you view other human beings as nothing more than numerical quantities whenever you attempt to address their problems.
I have seen no defense of what's happening in Gaza that anyone who actually values human life, all human life, would find rational. Recall the root of the word rational: ratio, proportion. What is happening in this case is quite blatantly a disproportionate response.
Still, for the record, other independent observers have documented the practices and explained why they don't meet the definition of genocide, John Spencer and Natasha Hausdorff to name two. It seems by no means clear that a claim of genocide is valid. I certainly wouldn't make one unless I were really, really certain of it, because getting such a claim wrong is, in my opinion, just as egregious as denying a true genocide.
> the deliberate killing of a large number of people from a particular nation or ethnic group with the aim of destroying that nation or group.
I didn't read the arguments of the people you cited because you didn't bother to link to them, but looking at their credentials, I'm not sure either of them is in a position to solidly refute the volume of footage and testimony to the contrary. The only people I've seen defending this have some kind of personal attachment to Israel, but come on, man, you can maintain that attachment and still condemn horrific acts committed by the state when they happen.
Under your dictionary definition, would you say that the Allies in WWII committed genocide of Germans and Japanese? If you say "yes", then I suppose you're entitled to your interpretation, and I know exactly how seriously to take it. If you say "no" then perhaps you could explain what is the difference between WWII (where proportionally far more civilians were killed) and the current Gaza war.
> you can maintain that attachment and still condemn horrific acts committed by the state when they happen
I agree. And one can refrain from condemning horrific acts that one is not certain happened.
I'm happy to continue the discussion if you like, but all I'm really after is your answer to this simple question: do you see defence of true genocide as equally bad as false accusation of genocide? I would be content with a simple yes or no. Or does the question make you uncomfortable somehow?
>John W. Spencer is a retired United States Army officer, researcher of urban warfare, and author. [1]
lmao, if I find two people from the military or law who deny the Holocaust, will you deny it too? Actually, maybe you shouldn't answer that.
[0] https://en.wikipedia.org/wiki/Natasha_Hausdorff
[1] https://en.wikipedia.org/wiki/John_Spencer_(military_officer...
In any case, this is really going off topic. All I am interested in is voidhorse's answer to my simple question. That doesn't require retreading the dark corners of human history.
I.e., the following is, I believe, a reasonable argument:
"I should have a right to live in this general patch of land, since my grand-parents lived here. Maybe my parents moved away and I was born somewhere else, but they still had a right to live here and I should have it too. I may have to buy some land to have this right, I'm not saying I should be given land - but I should be allowed to do so. Additionally, it matters that my grand-parents were not invaders to this land. Their parents and grand-parents had also lived here, and so on for many generations."
This doesn't necessarily imply genetic heritage - cultural heritage and the notion of parenthood are not necessarily genetic. I might have ~0% of the specific DNA of some great-great-grandparent (or even 0% of my parents' DNA, if I am adopted) - but I'm still their descendant. Now, how far you want to stretch this is very much debatable.
My claim is that this is factually incorrect by any stretch of the imagination, once we recognize that modern-day Palestinians are just as much descendants of the ancient Israelites as modern-day Jewish people are. Even though their language, culture, and religion have diverged, nothing ties one group more to that land than the other (if anything, those who left have a lesser tie than those who stayed, even where the culture of those who stayed diverged). So the claim of descent from, and continuity with, the ancient kingdom of Israel, the 2-3000-year-old history, is entirely irrelevant.
I responded to the most plausible interpretation of what the 2-3000 years of history could have to do with the previous poster being wrong about Israel occupying the lands of the Palestinian people.
And again, as to the claim: I'm basically saying that the 2-3000 years of history don't, in fact, justify the occupation - so just forgetting about them and focusing on what is actually happening today (a population is being kept in an occupied pseudo-state that they aren't allowed to leave) is enough to understand the whole situation, and who is in the right and who is in the wrong. So the 2-3000 years of history are irrelevant, because they don't overturn the easily visible conclusion you would draw.
Of course, in every conflict, the history is interesting and enlightening in some ways. But, unless the history changes the light in which you view the current conflict, it's irrelevant to the question of "who is the oppressor and who is the oppressed?".
You're purposefully misinterpreting multiple comments. The "That was in response to the argument that I believe the GP was making by bringing into discussion this timeline" comment shouldn't have even been necessary, at that point you'd already purposefully misinterpreted a very clear comment, but even if that had been a mistake on your part, simiones clearly explained the conversation to you, and yet you purposefully misinterpreted it again.
If you're going to keep up this tack, my challenge to you for your next comment is to find a way to convince me to waste even more time replying to you. I only wanted someone, at least, to put a foot down and point out what you're doing. Something like pedantry, but combined with bad-faith interpretations, and even more annoying for it.
pbiggar made a claim, now deleted (https://news.ycombinator.com/item?id=44317597)
zaphar says: "it ignores nearly 2-3000 years of history" (https://news.ycombinator.com/item?id=44317639)
simiones says: "The 2-3000 years of history are entirely and wholly irrelevant", and also makes some suggestion that I believed could indicate that genetic heritage was what determines which people should live where.
However, he subsequently clarified in https://news.ycombinator.com/item?id=44318095 that he was not making that claim. Yet in doing so claimed that "This doesn't imply genetic heritage necessarily - cultural heritage and the notions of parents are not necessarily genetic", drawing on the notion of "culture". Now, cultural heritage very specifically implies that history is relevant, because it's something passed down over centuries.
I then challenged him that his invocation of cultural heritage was in opposition to his earlier claim that "The 2-3000 years of history are entirely and wholly irrelevant" (https://news.ycombinator.com/item?id=44318122) to which he responded that "That was in response to the argument that I believe the GP was making" (https://news.ycombinator.com/item?id=44318317), but that's a complete presumption. The GP hadn't presented any specific argument, merely factually pointed out that some long stretch of history was missing from the analysis of pbiggar, so I asked simiones if he was responding to an argument not actually made by zaphar. Furthermore, I reiterated that simiones seemed to have defeated his own claim that "The 2-3000 years of history are entirely and wholly irrelevant".
This is where the original discussion ends, and you entered the thread. I see you making a number of unsubstantiated accusations of bad faith and trolling, but not actually engaging in the discussion of the topic at hand.
So, I have presented here a summary of a thread that highlights my process of rational enquiry. I don't see what in it could be taken as bad faith or trolling. Maybe you can explain further? Or perhaps you can engage with the topic at hand? I would be willing to (though it goes rather far off the original topic).
May your endless paragraphs continue to alleviate the suffering of the Palestinians experiencing genocide, or the poor Israelis who are sad because they have to do a genocide because whatever ethnostatist reason, or whatever it is you believe - from your post history, I'd guess Israel Enjoyer, but from the threads here it's anyone's guess. The benefit of being a smug Socratic type engaged in pedantry is you can never be accused of having the wrong values, since from initial appearances, you have none.
And yet you replied, curious. In any case, that's OK, since I wasn't responding in order to meet your challenge.
It's curious that you accuse me of having "no values" and of being a "Socratic type". I assumed that, on Hacker News, a forum reputed for its willingness to engage intellectually, a simple challenge to someone's argument would receive a simple response. I assumed that rational debate, free of emotive diversions, was welcomed here. Why would "my values" be relevant? Surely establishing a rational dialogue is what's important on Hacker News. This isn't Reddit, where the standard of dialogue is typically much lower.
simiones could have said "oh yes, you're right, the last 2-3,000 years of history are relevant". Or he could have continued by providing more rationale that they're not. Yet neither he nor anyone else has responded to my observation, instead I just received comments targeted personally at me.
It makes me wonder whether one side of this debate actually has substance to back up its beliefs and actions.
Of course, you have no obligation to respond. If you do respond, I would appreciate it if you would make it about substantive, rational arguments, not personal comments.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
It's very hard for me to take anyone seriously who doesn't speak out against the genocide. They're usually arguing about imaginary problems.
("if the will exists" in the article puts the blame for the situation on one side, which is inacceptable)
(Not a "gotcha". I really want to know.)
I don't know rationalism too well, but I think it was a historical philosophical movement asserting that you could derive knowledge by reasoning from axioms rather than by observation.
The primary difference here is that rationality mostly teaches "use your reason to guide what to observe and how to react to observations" rather than doing away with observations altogether; it's basically an action loop alternating between observation and belief propagation.
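To make that loop concrete, here's a minimal sketch of the "observe, then propagate beliefs" cycle as plain Bayesian updating; the coin-flip hypotheses and every name in it are my own illustration, not anything taken from LessWrong or the Sequences:

    # Minimal sketch of an observe/update loop (illustrative only).
    def update(prior, likelihoods, observation):
        # Bayes' rule: posterior is proportional to prior * P(observation | hypothesis)
        posterior = {h: p * likelihoods[h](observation) for h, p in prior.items()}
        total = sum(posterior.values())
        return {h: p / total for h, p in posterior.items()}

    # Two hypotheses about a coin: fair, or biased toward heads.
    likelihoods = {
        "fair":   lambda obs: 0.5,
        "biased": lambda obs: 0.8 if obs == "H" else 0.2,
    }
    beliefs = {"fair": 0.5, "biased": 0.5}

    for obs in "HHTHHHHT":  # the observation stream
        beliefs = update(beliefs, likelihoods, obs)
        # ...an agent would choose its next action here, based on `beliefs`...

    print(beliefs)  # mass shifts toward "biased" after mostly-heads data

The point is only the shape of the loop: observe, update the whole belief distribution, act, repeat.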
A prototypical/mathematical example of a pure LessWrong-type "rational" reasoner is Hutter's AIXI (a definition of the "optimal" next step given an input tape and a goal), though it has certain known problems with self-referentiality. Of course, reasoning in this way does not work for humans; a large part of the Sequences consists of attempts to port mathematically correct reasoning to human cognition.
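For reference (written from memory, so treat the exact notation as approximate), Hutter's expectimax definition of AIXI's next action is, in LaTeX:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \bigl[ r_k + \cdots + r_m \bigr]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, q ranges over programs that reproduce the interaction history, and \ell(q) is the length of q, so the 2^{-\ell(q)} weighting is the Solomonoff prior: every program consistent with the data counts as a hypothesis, shorter ones counting more, and the alternating max/sum is the agent planning against that mixture out to horizon m.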
You can kind of read it as a continuation of early-2000s internet atheism: instead of defining correct reasoning negatively, by enumerating incorrect logic (i.e. "fallacies"), it attempts to construct it positively, by describing what to do rather than just what not to do.