It’s not mainly financial capital that must be better allocated for Altruism to be Effective: it’s social capital.
The fundamental problem with Effective Altruism
Despite all its trumpeting about questioning fundamental assumptions, EA culture fails to question even commonplace ones.
For example:
Do UFOs demonstrate physics that mainstream science doesn’t know about?
If Nikola Tesla invented free wireless electricity 120 years ago, why don’t we have it today?
If Gandhi won a war without violence, why don’t we fight all wars that way?
For some reason, questions like these are BORING to EAs.
I know, because I tried asking them! I submitted a piece about them to their Critique & Red Teaming Contest (The Cost-Benefit of Weirdness), and it received 10 votes. Not 10 upvotes — 10 votes.
EAs’ level of disdain for heterodoxy I’ve only seen one other place: Everywhere.
EAs are supposed to be the rising stars of our intellectual firmament! Why is their Overton window so… plebeian?
Then I saw it.
But first, a little backstory.
Psychologically adjacent bullshit
In one of my favorite Twitter threads of all time, Venkatesh Rao pointed out that the hard thing about doing hard things is wanting the thing for real, as opposed to wanting "psychologically adjacent bullshit."

You have to want the thing because you value the thing, not because of some kind of ego gratification the thing would give you. There are many ways to get ego gratification, so if an easier way to get the same gratification comes along, you may mysteriously abandon the goal and do that instead.
I fell victim to this myself. When I was in my early 20s, spiritual work was my #1 priority. But my motivation for spiritual work was impure — I wanted to get close to God, but underneath that I was desperate for something far more mundane: self-respect. I wanted some psychologically adjacent bullshit — not God, but “healing for my lack of self-respect.”
What happened? I stayed in a toxic relationship with a highly respected woman. My ego said: “If she respects me, and she deserves respect, then I deserve respect.” And I feared the flip side: “If we break up, it’s because I don’t deserve respect.” And then I basically abandoned my “top priority” spiritual work to keep her around. This galls me to admit even now.
A shortcut to my psychologically adjacent bullshit-goal came along, and that “top priority” mysteriously fell by the wayside.
The lesson is this: If you attempt a grand undertaking for psychologically adjacent bullshit (PAB) reasons, then when a shortcut to it comes along, you might abandon your grand undertaking — and not even know why.
The human will seems to follow paths of least resistance, like electricity, or water flowing downhill.
When you want something for its own sake, it carves a path of least resistance that leads you to it:
Pure motives: If you want to read Moby Dick because you want to read Moby Dick, then Finnegans Wake just won’t do. You won’t stop until you get Moby Dick, no matter what alternatives present themselves.
PAB: But if you want to read Moby Dick because you want to feel smart and cultured, then Finnegans Wake could do just fine, and you’ll gravitate toward whatever is easier.
Bullshit-jacked
Effective Altruists don’t really care about effectiveness or altruism. They care about social status — it’s their psychologically adjacent bullshit, the secret need for which they’d abandon all their lofty principles.
And the purveyors of social status took advantage.
Effective Altruism appeals to people who genuinely possess the intellectual horsepower to obliterate today’s rickety-ass scientific paradigms.
But since Ivermectin does work against COVID, UFOs are real, and free wireless electricity is over 100 years old… the maintainers of our Overton window found a way to make nerds feel smart and powerful for refusing to question anything of import.
They got bullshit-jacked.
EA became a fake intellectual movement of would-be-real intellectuals who instead took the easy way to fame and fortune — a release valve for all the geopolitically inconvenient intellectualism in the world.
EAs got duped into building social capital for the same pitiable Overton window the ailing New York Times wants us in.
That’s why the Venn diagram of “ideas EA takes seriously” and “ideas the mainstream takes seriously” is essentially a single circle.
A vision for reviving EA
In general, the ideas EA rejects a priori are the same ones mainstream institutions reject a priori. This is by design.
But it seems self-evident that only breakthrough divergences from mainstream thinking could hope to yield breakthrough effectiveness by comparison. That means asking questions like, “Are there any scientific paradigm shifts hiding in plain sight?”
This requires social capital, because any sincere attempt at this is downright embarrassing. We have names for “paradigm shifts hiding in plain sight,” and none of them are flattering: conspiracy theory, pseudoscience, crackpot.
To reduce the social capital expended in considering possibilities that tend to fall under those umbrellas, it might make sense to rebrand Effective Altruism as Venture Altruism.
Modeled after the Venture Capital approach to maximizing return on investment, Venture Altruism would bet on ideas where the benefit of being right 1 time outweighs the cost of being wrong 99 times.
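To make that asymmetry concrete, here is a minimal expected-value sketch in Python. The numbers are hypothetical, chosen only for illustration: even a 1-in-100 hit rate pays for itself if the one win is big enough.

```python
# Minimal expected-value sketch of the Venture Altruism bet.
# All numbers are hypothetical, chosen only to illustrate the asymmetry.

cost_per_bet = 1.0       # capital (social or financial) spent investigating one "crazy" idea
payoff_if_right = 500.0  # assumed upside if the idea turns out to be a real paradigm shift
p_right = 0.01           # assumed hit rate: right 1 time in 100

ev = p_right * payoff_if_right - (1 - p_right) * cost_per_bet
print(f"Expected value per bet: {ev:+.2f}")  # +4.01: positive despite losing 99% of the time
```

On these assumptions the math works like a VC fund’s: the expected value stays positive as long as the payoff on a win exceeds roughly 99 times the cost of a loss.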
This solves several problems at once:
It gives VAs an altruistic excuse for taking things seriously that “sound crazy” (“I don’t believe it personally, but it’s better to leave no stone unturned”)
It gives VAs a financial excuse for taking things seriously that “sound crazy” (“I don’t believe it personally, but it’s a great risk-reward if it bears fruit, and odds are one of these things will work out”)
The Venture Altruism frame reduces the social capital required to actually follow through on Effective Altruism’s original goals, so that the movement doesn’t get bullshit-jacked again.
How VA works
Vaguely right, not precisely wrong — VA means a lot less energy going into being precisely right about mainstream ideas, and a lot more energy going into being vaguely right about high-leverage ideas: paradigm shifts hiding in plain sight.
Demographically agnostic — VA seriously considers ideas from different social demographics — because doing the most good is more important than loyalty to a particular social or political identity.
Scientific taboo agnostic — VA seriously considers ideas that “sound crazy,” because doing the most good is more important than appearing sane to people who don’t care as much about altruism as you do.
Humiliation is your friend — VA abhors social status and the dangers it poses to sincere intent. VA prioritizes information from low-status people and social groups, because by definition the most under-appreciated information is ignored or reviled by current institutions.


The X-Risk of ignoring Galileo
When it comes to sniffing out paradigm shifts hiding in plain sight, the question isn't only “is there proof” — it's also “If there's proof, would they tell you?”
EAs tend to think "Yes. 100%. End of story."
The “no” option is an X-risk worth mitigating, but it takes social capital to do so.
EAs have the knowledge. They have the money. But do they care enough?