Jackipedia
Last edited: 2026-04-06 22:25:53  |  2 revisions

Effective Altruism

Category: Philosophy / Concepts
Summary: The intellectual and social movement that applies evidence and reason to the question of how to do the most good — influential in tech, contested in practice
Last updated: 2026-04-06

Overview

Effective Altruism (EA) is a philosophical and social movement that combines two commitments: using evidence and reason to identify the most effective ways to benefit others, and then acting on those conclusions rather than just holding them. It originated in academic philosophy (Peter Singer, William MacAskill) and spread through tech and finance communities in the 2010s.

EA is intellectually serious, practically ambitious, and has become increasingly controversial — particularly after the FTX collapse in 2022 and subsequent questions about whether EA’s utilitarian framework can generate rationalized justifications for harmful actions.

Core Ideas

Impartiality — A stranger’s suffering matters as much as a friend’s. A person in a distant country matters as much as someone in your city. Future people matter as much as present ones. This sounds like an obvious point until you follow it to its conclusions.

Comparative effectiveness — Some interventions to reduce suffering are orders of magnitude more effective than others. Distributing anti-malarial bed nets saves more lives per dollar than almost any other intervention available. Understanding this differential matters.

Cause prioritization — EA asks which problems are most important to work on. The criteria: scale (how many people affected?), tractability (can we actually make progress?), neglectedness (are other people already working on this?).

The most popular EA causes:
- Global health and development (bed nets, deworming, cash transfers)
- Animal welfare (factory farming affects billions of sentient creatures)
- Long-term future / existential risk (AI safety, biosecurity, pandemic prevention)
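As a toy illustration, the three prioritization criteria are often treated multiplicatively: a cause only ranks high if it does well on all of them. The scores, cause names, and scoring function below are hypothetical placeholders for illustration, not estimates anyone in EA has published.

```python
def itn_score(scale, tractability, neglectedness):
    """Multiply the three criteria: a cause ranks high only if it affects
    many people, admits real progress, AND is under-resourced."""
    return scale * tractability * neglectedness

# Hypothetical 1-10 scores, purely for illustration:
causes = {
    "global health":    itn_score(scale=8, tractability=9, neglectedness=4),
    "animal welfare":   itn_score(scale=9, tractability=5, neglectedness=7),
    "existential risk": itn_score(scale=10, tractability=3, neglectedness=8),
}

ranked = sorted(causes, key=causes.get, reverse=True)
```

A real prioritization exercise replaces these scores with researched estimates; the point of the multiplicative form is that a near-zero on any one criterion drags the whole cause down, no matter how strong the other two are.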

Earning to Give

One EA idea that spread widely in tech: you don’t have to work directly on important problems to have impact. You can work in a high-earning field (finance, tech) and donate substantially to high-impact causes. This reframed the choice between “pursuing money” and “doing good” as a false dilemma.

The effective altruist calculus: a talented engineer who earns $500K/year and donates 30% can fund multiple full-time charity workers. The direct path — going to work for a nonprofit at $60K — might have less total impact.
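The arithmetic behind that comparison is simple enough to write out explicitly. The salary and donation figures are the illustrative numbers from the paragraph above, not real data.

```python
# Illustrative figures from the example above, not real salary data.
engineer_salary = 500_000
donation_rate = 0.30
nonprofit_salary = 60_000

annual_donation = engineer_salary * donation_rate    # $150,000/year donated
workers_funded = annual_donation / nonprofit_salary  # 2.5 full-time salaries
```

On these numbers the donor funds two and a half nonprofit salaries a year while still keeping $350K, which is the core of the earning-to-give case.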

This argument is both compelling and contested. Critics note that it can become a rationalization for continued participation in industries with negative externalities, and that actual donations often fall short of the initial commitment.

GiveWell and Evidence-Based Giving

GiveWell is an EA-adjacent nonprofit charity evaluator that does rigorous cost-effectiveness analysis on global health interventions. Their top charities — Against Malaria Foundation, Helen Keller International (vitamin A supplementation), GiveDirectly (direct cash transfers) — represent the state of evidence-based giving.

GiveWell’s methodology asks three questions: what is the cost per life saved or per DALY (disability-adjusted life year) averted? How strong is the evidence behind the intervention? And how much does additional funding actually help (is the organization funding-constrained)?
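A minimal sketch of that kind of cost-per-outcome comparison, with invented numbers — these are not GiveWell’s actual estimates:

```python
def cost_per_outcome(program_cost, outcomes):
    """Dollars spent per unit of outcome, e.g. per life saved or DALY averted."""
    return program_cost / outcomes

# Two hypothetical interventions given the same $1M budget:
bed_nets  = cost_per_outcome(1_000_000, outcomes=200)  # $5,000 per life saved
other_aid = cost_per_outcome(1_000_000, outcomes=10)   # $100,000 per life saved

gap = other_aid / bed_nets  # a 20x effectiveness gap in this toy example
```

The real-world gaps GiveWell documents are what make the comparison worth doing: at equal budgets, choosing the cheaper intervention per outcome is the entire argument.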

This is a more rigorous approach to charitable giving than most people apply, and the underlying evidence on cost-effectiveness is genuinely striking.

Long-Termism

A significant branch of EA focused on the long-term future of humanity: if future people matter morally, and there could be trillions of future people, then the expected value of small reductions in existential risk (nuclear war, engineered pandemic, misaligned AI) is enormous.

This logic leads some EAs to prioritize AI safety, biosecurity, and governance work above immediate suffering reduction, on the grounds that the potential impact is so much larger.
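The expected-value arithmetic driving this is stark even with toy numbers. Every figure below is an assumption for illustration, not anyone’s published estimate.

```python
# Toy long-termist expected-value calculation; all inputs are assumptions.
future_people = 10**12  # "trillions of future people"
one_in = 10**9          # a one-in-a-billion absolute cut in extinction risk

# Expected future lives saved by that tiny risk reduction:
expected_lives = future_people // one_in  # 1,000 lives in expectation
```

The critics’ objection lands exactly here: multiply a huge assumed population by a tiny, hard-to-justify probability and the product can dominate any concrete present-day comparison, which is why the probability estimates carry so much weight.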

Long-termism is controversial even within EA. Critics argue that it drifts into speculation about hypothetical future populations while real people suffer now; that the math is only as good as the probability estimates, which are extremely uncertain; and that it can justify almost anything under the banner of cosmic-scale expected value.

The FTX Collapse and EA’s Crisis

Sam Bankman-Fried (SBF) was one of the most prominent figures in the EA community before FTX collapsed in November 2022. He explicitly framed his work in EA terms — earn to give, accumulate capital to donate, maximize impact. He was the largest donor to the EA movement.

FTX turned out to have been using customer funds for Alameda Research’s trading positions. Billions of dollars in customer money were lost. SBF was convicted of fraud and sentenced to 25 years in prison.

The collapse raised serious questions about EA’s relationship to consequentialist ethics: if the goal is maximizing good outcomes, can that framework generate post-hoc rationalization for actions that are simply harmful? Does “galaxy-brained” utilitarian reasoning — where a clever chain of logic leads to a monstrous conclusion — represent a failure mode of EA’s intellectual framework?

The EA community’s response has been mixed: some argue FTX was an individual failure, not an EA failure; others argue the community’s culture enabled and celebrated SBF in ways that reflect genuine institutional problems.

EA in Tech

EA has been influential in Bay Area tech culture since the mid-2010s. OpenAI was founded partly by people influenced by EA and existential risk thinking. Many AI safety researchers operate within an EA framework. The rationalist community (LessWrong, Slate Star Codex) has significant overlap with EA.

The movement’s presence in tech creates a particular cultural mix: people who are intellectually serious about ethics, genuinely committed to doing good, working in an industry that is also creating the technologies EA is most worried about. The tension is real and unresolved.

Revision history

Date                 | Commit   | Edit summary
2026-04-06 22:25:53  | d88be0d4 | build: auto-update 2026-04-06 22:25 UTC (127 pages)
2026-04-06 21:57:30  | d04fc9bc | build: auto-update 2026-04-06 21:57 UTC (127 pages)