Ever since the decline of the Quakers, it has been rare to see important companies governed by a spiritual or philosophical outlook. Boardrooms and executive suites have tended to be ideology-free, run on a blend of the pragmatic, the opportunistic and the selfish. But that was before effective altruism.
Often summarised as “doing the most good you can”, this neo-utilitarian, philanthropy-oriented worldview has spread across elite places of study, the salons of the super-rich, and Silicon Valley — and popped up in some of the biggest corporate stories of the past year.
This autumn, Sam Bankman-Fried was tried and convicted of fraud for diverting client funds from his FTX crypto exchange to his trading company Alameda. Weeks later, the tech industry was riveted by the ousting and sudden return of Sam Altman as chief executive of OpenAI, the creator of ChatGPT. The companies had something more in common than being at the frontier of tech: key decision makers in both subscribed to “EA”.
Most of FTX/Alameda’s leadership were devoted effective altruists. “It was extremely serious to them . . . it was what drew them together in the first place,” says Michael Lewis, chronicler of Bankman-Fried’s rise and fall. The companies were major donors to EA causes. Meanwhile, OpenAI’s boardroom drama turned on whether commercial or safety considerations should set the pace of development. The EA movement has become increasingly exercised by the risk that a rogue AI poses to humanity’s existence.


To find a set of philosophical ideas at the top of two of the world’s most important tech businesses is surprising enough. Even more puzzling is how quickly effective altruism rose to prominence — it is barely a decade since a couple of young philosophers at the University of Oxford invented the term and started proselytising its arguments.
There are now hundreds of local EA groups around the world, many at top universities. Thousands of users regularly debate EA on the movement’s online forum (the site branded with the EA logo of a lightbulb with a heart-shaped filament) and join its conferences. Every year, organisations committed to the theory channel hundreds of millions of dollars in philanthropic funding. EA “is quickly becoming the ideology of choice for Silicon Valley billionaires”, one sceptical academic philosopher complained to me. And it attracts many bright young people. “They’re all effective altruists,” a colleague whose children are recent graduates remarks of their generation’s ideological orientation.
We may have reached an Icarus moment in EA’s short history. Part of the ecosystem tied itself very closely to the FTX moneymaking machine, including William MacAskill, one of EA’s founders and perhaps its single most influential individual. “MacAskill was all over Sam’s life,” Lewis tells me. (MacAskill’s office declined interview requests.) FTX’s collapse shook the movement’s confidence and pulled the financial rug out from under its feet.
Whatever its future, the story of EA raises major puzzles. Why did the effective altruism movement appear in the time and place it did? How could it reach pinnacles of the 21st-century technology business? And, above all, is EA’s success a good or a bad thing for the world?
How the ideas behind EA’s thicket of slickly presented organisations could catch on so quickly feels all the more astonishing to someone with my particular experience. The intellectual environment from which EA emerged used to be my environment. Before joining the FT in late 2008, I had taught moral philosophy at Harvard and Wharton, done a PhD at the intersection of philosophy and economics, and first studied ethics at Oxford. Yet I did not see EA coming at all. In my decade and a half of exposure to academic moral philosophy, nobody I knew would have predicted that any philosophical outlook, let alone this one, would take off in such a spectacular way.
A potted history of effective altruism could go like this.
Once upon a time there was utilitarianism, or doing that which brings “the greatest happiness to the greatest number”: the arithmetically appealing moral theory of Jeremy Bentham that became a rescue buoy for Victorians who lost their Christian faith. In the 20th century, utilitarian thinking became ever more sophisticated in response to criticism — in particular over its assumption that all outcomes can be weighed against each other, and its struggle to make sense of the moral importance of rights and of personal attachments.
But its core remains that morality requires aiming for the greatest expected value of individual choices’ consequences for wellbeing and suffering, impartially measured. Utilitarianism was entrenched in a crude way in cost-benefit analysis of economic policy, but progressively lost the favour of philosophers, who considered it too freighted with implausible implications.
Then came Peter Singer. In a famous 1972 article, the Australian philosopher argued that not giving money to save lives in poor countries is morally equivalent to not saving a child drowning in a shallow pond just to avoid getting one’s shoes muddy. While you could arrive at this conclusion through other routes than utilitarianism, that tradition is particularly fitting for Singer’s calls to widen our moral circle — to treat suffering and wellbeing everywhere as equally important (including in animals), and to see morality as the requirement to cause more wellbeing and less suffering in total.
The implications are stark. Any spending on personal luxury, when the excess income could instead combat hunger or poverty, would stand condemned, as would industrial farming.
In my time in academia, Singer’s philosophical rigour was respected, but his argument was also treated as a bit of a reductio ad absurdum. The common view was that if your principles entail such all-consuming moral demands, then it is the principles that need revising. But a generation later, the seed planted by Singer found extraordinarily fertile soil.


The founders of EA, MacAskill and Toby Ord, have both credited Singer’s article with their moral awakening in the mid-2000s. “Studying that paper and others, I wrote an essay for my BPhil at Oxford — ‘Ought I to forgo some luxury’,” Ord told me, which “forced me to think seriously about our moral situation with regard to world poverty”.
Within a few years, Ord and MacAskill had founded Giving What We Can, an organisation collecting pledges to donate 10 per cent of one’s income to the charities that evidence could show to have the greatest per-dollar impact on lives saved or improved. Since then, the organisation says, nearly 10,000 donors have signed on, for several billion dollars in accumulated pledges. (Singer was one of the first.) Next came a career-advice service called 80,000 Hours, encouraging students to choose the career that would do the most good — which in many cases would mean finding ways to “earn to give” to effective charities, rather than directly devoting oneself to, say, medicine, social work or development. In both cases the motivation to reach for the maximum good one could do was central.
Out of this grew the Centre for Effective Altruism (later renamed Effective Ventures), a hub that channelled fundraising to EA projects and to building the community itself, running everything from recruitment events at universities to conferences and an online forum. In the financial year to mid-2022, Effective Ventures’ income was £140mn.
The EA ecosystem has other roots besides the Oxford philosophy department. Around the same time, entrepreneur-philanthropists in the US adopted a more businesslike focus on finding the most effective charities to support, an effort associated with groups such as GiveWell and Open Philanthropy, and with donors including Facebook co-founder Dustin Moskovitz. These efforts, too, proclaim a desire to “do as much good as possible”. But while they share the empirical hardheadedness of the group that developed at Oxford, they seem less invested in the philosophical framework.
This matters, because there are two ways to characterise EA. One is modest: it says that if you are going to donate to charity, pay attention to what you fund and choose the most effective charities. Ord likens it to cost-effectiveness analysis: “EA is not about how to live a good life, but about how to do the altruism part” of it best. (Giving What We Can had early links with economists pioneering randomised trials for development policy.)
That is hard to argue with: why not want your charity dollars to do the most good possible? But even this modest version leads to some uncomfortable implications: it is wrong to volunteer your time for a cause you can better advance by “earning to give” to it; it is wrong to choose an “inefficient” cause — say, research into an expensive-to-treat disease that killed a loved one. This is what EA celebrates as “cause prioritisation”.
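Cause prioritisation is, at bottom, a ranking by expected good done per dollar. Here is a minimal sketch of that arithmetic; the cost figures are invented for illustration, not taken from GiveWell or any real charity evaluation:

    # Rank causes by expected good per dollar: the core of EA's
    # "cause prioritisation". All dollar figures are hypothetical.
    cost_per_life_saved = {
        "malaria bednets": 5_000,          # assumed cost to save one life
        "rare-disease research": 250_000,  # assumed cost for the personal cause
    }
    budget = 10_000
    for cause, cost in sorted(cost_per_life_saved.items(), key=lambda kv: kv[1]):
        print(f"{cause}: {budget / cost:.2f} expected lives saved per ${budget:,}")

On these made-up numbers, the personal cause loses by a factor of 50, which is exactly the uncomfortable implication described above.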
The philosophically non-committed can content themselves with this. But stronger commitments inspire the fervour seen among EA’s young adherents — many of whom testify to how EA changed their life and gave it purpose.
For if you take Singer-type ideas seriously, the modest version is not where you stop. If you find cause prioritisation — maximise the good from each dollar you give — morally convincing, how can you not apply it to the amount you should give in the first place? Next, how can you not apply it to your career choices and how much money you could make to give away — as EA’s 80,000 Hours encourages?
Finally, how can you not ask whether you should really focus on the poor in the world, or farmed animals’ suffering, if there is even a small chance that an asteroid or AI could deny trillions of potential future lives their existence, and you could devote your resources to preventing that? Such “longtermism” is increasingly being adopted by the EA community and its Silicon Valley friends.
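The longtermist step rests on the same expected-value arithmetic, only with a vastly larger multiplier. A minimal sketch, with every number invented for illustration:

    # The longtermist wager: a tiny probability of averting extinction
    # swamps present-day causes once trillions of potential future
    # lives enter the sum. Both numbers below are hypothetical.
    p_catastrophe_averted = 1e-6    # assumed chance your work prevents extinction
    potential_future_lives = 1e12   # assumed lives at stake over the long term
    print(f"{p_catastrophe_averted * potential_future_lives:,.0f} expected lives saved")

A one-in-a-million chance of saving a trillion lives is “worth” a million lives in expectation, more than almost any present-day intervention can claim.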
There is a reason, I think, why Oxford should have been the place where such ideas have blossomed. Two of its philosophers, John Broome and the late Derek Parfit, spent their careers thinking about the moral need to account impersonally for the wellbeing and suffering of distant or future lives. Their rigorous arguments provided springboards for the sort of reasoning EA is committed to. Not only that: they personally trained the movement’s intellectual leaders. Parfit supervised Ord’s doctorate, Broome both Ord’s and MacAskill’s. Broome admits to being “surprised by how effective [EA] is. It’s impressive how attractive it has been to the young.”
“The students are wild about it,” says Alan Strudler, an old Wharton colleague of mine who still teaches ethics there. But why? “The psychological reason is easy: they are desperate for a way to do good but they don’t know how,” Strudler thinks.
That does not explain why EA took off when it did. Many bring up the internet’s ability to make ideas go viral. But I think it has as much to do with the challenges afflicting late millennials and Gen-Zers. This demographic came of age during the global financial crisis or the subsequent austerity and crisis periods. And they were children or young adults when social media and nonstop connectivity arrived. (Facebook was started in 2004; the first iPhone launched in 2007.) A plausible case has been made that this has resulted in greater loneliness and social anxiety. And they are the first generation to grow up facing the undeniability of possibly devastating climate change.
Is it surprising that a generation with poor prospects and good reasons to be disenchanted with the world their elders have bequeathed them should disproportionately find meaning in altruism? It would be more surprising if they did not.
Strudler told me that students are also attracted to the seeming purity of EA’s technical apparatus, the same as utilitarianism’s. The rational, calculating search for what produces the most good promises “a mechanical and what appears to be an exact way of explaining how to be good in your life . . . It appeals to techies of every sort.”
The reduction of moral questions to mere technical problems is surely one reason that EA spread in two particularly moneyed techie communities: Silicon Valley and quantitative finance. Perhaps it was a foregone conclusion that it would become the confession of choice where those two industries converge: crypto, of which Sam Bankman-Fried was once king.
If young people want to do good, what’s not to like? “It’s good to see that there are so many altruists,” says Broome. But, he adds: “I think these people are naive . . . The focus on philanthropy . . . gives cover to wealth and the increasing inequality there is in the world . . . Where the efforts of these altruists should be directed is towards ensuring that governments behave properly. They are not thinking enough about the political structure that underlies their ability to give money away.”
As another philosophy professor put it to me, EA suggests to bright undergraduates that “the world is a problem they can solve, not through the difficult work of politics . . . but simply by applying an easy algorithm”. For Strudler, it reflects “a failure of imagination. [EA] is a substitute for hard moral judgment, but it’s a substitute that doesn’t work.”
One Oxford philosopher says: “The success of EA cannot be explained without exploring its relationship to wealthy people.” That includes the funding of multiple satellite institutions at Oxford, such as the Global Priorities Institute, whose original mission statement included “to gain widespread acceptance of the core tenets of EA throughout academia”.
The relationship with the university is not unproblematic. One academic told me of unhappiness in the economics department with the quality of some economics researchers hired outside the department but benefiting from the Oxford brand. Another said the philosophy department tolerates these institutions because they help raise philosophy’s “impact score” in research fund applications.
EA’s relationship with very rich people comes with risks. The collapse of FTX caused a crisis of trust throughout the movement. “An awful lot of people felt betrayed,” says Shakeel Hashim, head of communications at the Centre for Effective Altruism.
Then there were the financial consequences. In the year to June 2022, Effective Ventures lost £5.2mn on cryptocurrencies, according to its annual accounts, and just this month announced it had to repay the FTX bankruptcy estate $27mn and would split into separate legal entities for each EA project under its umbrella. The foundation is also subject to an inquiry by the Charity Commission following the FTX bankruptcy. There is no indication of wrongdoing.
But if I were an effective altruist, what would worry me most is that EA has failed to produce “the most good” in the two public cases where it should have made the biggest difference.
The presence of effective altruists on OpenAI’s board was natural insofar as “longtermism” and prospective future catastrophes, such as rogue AI, are taking up ever more of EA’s attention. But one person familiar with the OpenAI boardroom conflict says that EA ideas did not play any role. If so, that must surely count as a huge missed opportunity to avert potentially devastating future harm, a failure that by EA lights is as morally bad as causing the harm itself.
And EA ideas clearly did not discourage the fraud at FTX. It is not implausible to think Bankman-Fried knew he was breaking the law but concluded from his trading experience that he had good enough odds to make the money back many times over, enough to make FTX clients whole and unaware of anything amiss, with more left over for EA causes. In other words, he may have thought the expected value of the fraud was distinctly higher than that of honesty. And, if this was the case, who are we to say he was not correct — just unlucky? (Investors are currently trading FTX bankruptcy claims at 70 per cent of face value, implying a high degree of confidence in recovering much of the missing money.)
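To spell that reasoning out, here is a minimal sketch of the expected-value comparison. Every probability and payoff is invented for illustration; nothing is known about any calculation Bankman-Fried actually made:

    # The fraud as a naive expected-value bet. All numbers invented.
    p_win = 0.9              # assumed odds of making the diverted funds back
    gain_if_win = 10e9       # assumed surplus for clients and EA causes ($)
    loss_if_lose = -8e9      # assumed clients' losses if the bet fails ($)
    value_of_honesty = 1e9   # assumed value of simply not committing fraud ($)

    ev_fraud = p_win * gain_if_win + (1 - p_win) * loss_if_lose  # $8.2bn
    print(f"fraud: ${ev_fraud / 1e9:.1f}bn vs honesty: ${value_of_honesty / 1e9:.1f}bn")

On these invented numbers the fraud “wins”, which is precisely the gap in the philosophy that Ord concedes below.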
Everyone in the EA community is adamant that Bankman-Fried’s conduct was nonetheless wrong. Ord contrasts Bankman-Fried — “a classical utilitarian, number-crunching kind of person” — with most effective altruists, “guided by conventional wisdom tempered by an eye to the numbers”.
But that raises the question of what arguments, within effective altruism, could condemn what Bankman-Fried did (assuming it was reasonable to see the odds as described above). When I put this to him, Ord accepted I had a point. There must be constraints, he insisted, but admitted they are “not built into the philosophy”. This implies, it seems to me, that to achieve the most good we can do, we should not take EA too seriously.
Martin Sandbu is the FT’s European economics commentator. He also writes Free Lunch, the FT’s weekly newsletter on the global economic policy debate