What the heck is going on with e/a, e/acc, and AI?

I’ve been pretty confused by seeing “e/acc” suddenly everywhere.

So I figured, let’s try to make sense of it all through writing.

First of all, e/acc is obviously a reference to the effective altruism (e/a) movement.

At its heart, effective altruism is about smart philanthropy – maximizing the “ROI” on charitable spending, i.e. doing the most good with a given amount of money.

Effective accelerationism, on the other hand, is a philosophy that advocates for the rapid advancement of artificial intelligence technologies.

But somehow it became team e/acc vs. team e/a.

How did that happen?

Effective altruism was, for a while, primarily about evidence-based ways to reduce global poverty and suffering. You should donate to deworming, vaccines, and malaria nets, since these initiatives do the most good with your donations.

But then the philosophy quickly broadened and became about how to use evidence and reasoning to determine the most effective ways to improve the world.

And in short, team e/a applied “rational reasoning” to determine that the most effective way to improve the world is by reducing the existential risk posed by AI.

The math is: “If humanity lasts another 50M years and current trends hold, the total number of humans who will ever live is 3 quadrillion. The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001% can be expected to save 100 billion more lives than preventing the genocide of 1 billion people. With this math, any tail-risk, extinction-level event trumps anything actually happening in the world today.”
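To make the structure of that argument concrete, here’s a rough Python sketch of the expected-value calculation. Every figure in it is a placeholder assumption, just like the figures in the argument itself:

```python
# A sketch of the longtermist expected-value argument.
# Every input here is an assumed placeholder, not a measurement.

FUTURE_HUMANS = 3e15        # "3 quadrillion" potential future people (assumed)
LIVES_SAVED_TODAY = 1e9     # e.g. preventing the genocide of 1 billion people

def expected_lives_saved(risk_reduction: float) -> float:
    """Expected future lives 'saved' by shaving risk_reduction off the extinction risk."""
    return risk_reduction * FUTURE_HUMANS

# The conclusion hinges entirely on which made-up risk reduction you plug in:
for risk_reduction in (1e-3, 1e-9, 1e-19, 1e-35):
    ev = expected_lives_saved(risk_reduction)
    verdict = "beats" if ev > LIVES_SAVED_TODAY else "loses to"
    print(f"risk reduction {risk_reduction:.0e}: "
          f"{ev:.2e} expected lives, {verdict} saving {LIVES_SAVED_TODAY:.0e} lives today")
```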

Through this logic, the e/a movement’s main cause became slowing down or stopping AI development to minimize this extinction risk. Instead of donating to deworming, vaccines, and malaria nets, you should donate to initiatives that minimize the risk posed by AI.

And that is the exact polar opposite of effective accelerationism.

Now it’s not hard at all to see why the e/a logic is flawed here. The numbers are completely made up.

No one knows if p(doom) (the probability of the worst possible outcome of AGI) equals 0.00000000000000001% or 0.00000000000000000000000000000000001%.

Every single estimate of p(doom) is 100% based on vibes.

And yet, you need a specific value to make your reasoning appear rational and evidence based.

The intentions might be good, but the whole argument is nonsense nevertheless.

Comparing the good you do by donating to A vs. B is nonsense when the numbers used for the comparison aren’t grounded in real-world data.

Now, these kinds of problems have plagued the effective altruism community forever.

On the surface, it all seems straightforward and logical. You put a number on how much good you do by donating to certain causes, then you focus on the ones that do the most good.

One core problem is that coming up with and comparing these numbers often requires flimsy bookkeeping.

For example, William MacAskill, co-creator of the effective altruism movement, argues in his book Doing Good Better that deworming is more effective than donating textbooks.

The logic goes like this:

  • Researchers tested the effectiveness of donating textbooks and found that test scores did not improve.
  • Researchers also tested the effectiveness of deworming and found that it works incredibly well at reducing absenteeism.

Hence you should donate to deworming, not to textbook donation initiatives.

I hope you noticed that this comparison is, of course, nonsense since it compares apples to oranges.

To draw any conclusions, we would need to know how effective deworming is at improving test scores.

And interestingly, the exact same study MacAskill cites has this data too.

The result?

The researchers “do not find evidence that deworming improved academic test scores.”

Huh?

It would be one thing if the data were not available. But citing a paper that allows a direct comparison of results, then choosing to compare apples to oranges because it better supports your story, does not reflect well on the movement.

MacAskill also argues that certain initiatives are 100 times more effective than others, even though the numbers used are rough estimates that can easily be off by a factor of 100.
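A quick back-of-the-envelope illustration of why that matters. The cost figures below are invented purely for illustration:

```python
# If each cost-effectiveness estimate can be off by 100x in either direction,
# a claimed "100x more effective" comparison tells you almost nothing.
# All figures below are invented purely for illustration.

estimated_cost_per_outcome_a = 5.0     # dollars per unit of good, point estimate
estimated_cost_per_outcome_b = 500.0   # dollars per unit of good, point estimate
uncertainty_factor = 100.0             # each estimate may be off by up to 100x

claimed_ratio = estimated_cost_per_outcome_b / estimated_cost_per_outcome_a

# Worst case for A, best case for B:
low_ratio = (estimated_cost_per_outcome_b / uncertainty_factor) / (
    estimated_cost_per_outcome_a * uncertainty_factor)
# Best case for A, worst case for B:
high_ratio = (estimated_cost_per_outcome_b * uncertainty_factor) / (
    estimated_cost_per_outcome_a / uncertainty_factor)

print(f"claimed: A is {claimed_ratio:,.0f}x more effective than B")
print(f"possible: anywhere from {low_ratio:.2f}x to {high_ratio:,.0f}x")
# -> claimed 100x, but anywhere from 0.01x (B is actually better) to 1,000,000x.
```

With that much slack in the inputs, the data is just as consistent with B being the better option.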

Another core problem is that the amount of good you do can be quantified in infinitely different ways.

Is improving test scores really a good measure of doing good?

How do you weigh saving a human’s life vs. an animal’s life? What about saving a dog’s life vs. an ant’s life?

How do you weigh funding research vs. funding the application of existing solutions?

Or what about saving a human life today vs. potentially saving a thousand lives in the future?

You can easily make the case that saving a human life today is far more important, since every single invention or discovery made in our lifetime will have huge ripple effects for the quadrillions of humans who will live in the future. And every single human alive now increases the odds of breakthrough discoveries. The next Einstein might be dying of malaria.

And the problems do not end here.

Sam Bankman-Fried was one of the most prominent faces of the effective altruism (e/a) movement. He told everyone that the only reason he was trying to make as much money as possible was to give it all away in a way that benefits humanity the most.

This story definitely helped his public image a lot.

He’s giving Tom Brady $20M for 3 days of work? Spending silly money on a Super Bowl ad? Getting crazy rich through something that is effectively an unregulated online casino?

Sure, that might seem silly, but you’re missing the bigger picture. He’s just doing it so he can earn more, so he can give more away and help humanity more.

He was also heavily endorsed by William MacAskill, who vouched for him and opened doors.

In fact, it was reportedly MacAskill who convinced Bankman-Fried, while he was still an MIT student, to maximize his impact by taking a high-paying finance job and giving the money away.

That’s e/a logic par excellence.

Should you A) work for a charity or B) take a job where you can maximize your earnings so you can donate a lot of money to charities?

Effective altruists argue that if you apply cold hard logic, the answer is obviously B.

By taking on a high-paying job and donating 50% of your lifetime earnings, you can pay for two nonprofit workers in your place.

That’s how you maximize your positive social impact. Or at least, so the e/a argument goes.
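As a quick back-of-the-envelope version of that argument, here’s a sketch in Python. The salary figures are rough assumptions, in line with the numbers discussed a bit further down:

```python
# The earning-to-give arithmetic, using rough assumed salary figures.

market_salary = 200_000      # what you could earn in a high-paying job (assumed)
nonprofit_salary = 40_000    # what the charity pays its workers (assumed)
donation_rate = 0.5          # donate 50% of your earnings

donations_per_year = market_salary * donation_rate
workers_funded = donations_per_year / nonprofit_salary

print(f"Donating ${donations_per_year:,.0f}/year funds {workers_funded:.1f} nonprofit workers,")
print("versus the single worker you would be by joining the charity yourself.")
```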

Here, too, the logic is flawed.

If I start working at a charity, do I really only contribute $40k/year worth of labor?

If my salary elsewhere would be $200k/year, am I wrong to argue that I also contribute $200k/year at the charity?

It’s just that I take a pay cut because I believe in the mission. My work isn’t worth only $40k just because that’s what I’m getting paid.

Yes, if you think about humans as cogs in a machine that can be replaced at will, the e/a logic might hold.

But in my experience, one person can have 10x the impact of another despite working in the same role at the same organization. Just think of high school teachers for an easy example.

Now last but not least, what does the fact that Sam Bankman-Fried committed fraud say about effective altruism?

I can definitely see how pushing people to earn as much as possible so they can donate more to charities can lead to some questionable choices.

Using e/a logic, unethical behavior can be justified if you’re contributing sufficient donations. You can justify almost any action as long as you’re doing more good than bad, right?

Leaving all questions of morality aside, once again, the core problem is that there is no canonical definition of “good” or “bad” that would allow for a straightforward comparison.

To come full circle here, Sam Bankman-Fried was team e/a, and not just in the original meaning of the term but in the anti-e/acc sense.

He invested $500 million into Anthropic. The founders of Anthropic left OpenAI to build a safety-focused AI company, unhappy with how safety concerns were handled there.

For the sake of argument, let’s say Anthropic is team e/a, OpenAI is team e/acc.

Both have the stated goal of developing AI that benefits all of humanity.

Anthropic approaches this by restricting access to a small select group of users.

OpenAI, on the other hand, lets anyone use its models.

This nicely illustrates what it all boils down to eventually: power.

“Let’s seize the means of production. Then we can distribute and organize them more fairly.”

Yeah, we all know how that one played out.

Now let me be clear: I still have no clear understanding of what e/acc is. I don’t think it’s a real thing yet, you know, with books written and professors thinking about it full-time.

To me it seems like the initial spark of a new counterculture, similar to the one that started in the ’60s in response to the Vietnam War. Currently it’s purely a response to e/a-fueled AI doomerism.

At the same time, I don’t think anyone seriously disagrees that AI development comes with some serious risks. The probability of a scenario where some AI decides to kill all humans and has the means to do so is not zero.

But arguing that p(doom) equals some number you made up, and that it’s therefore more important to engage in AI doomerism than to save people’s lives through donations or research, is just plain nonsense.

The gist of it all is that the world we live in is incredibly complex.

Attempts to explain and operate within it using simple numbers often do more harm than good.

Written on November 26, 2023
