AI Summarizes Holden Karnofsky

We asked AI to give us the TLDR of Holden's recent talk with SPC. We still needed a human.

Holden Karnofsky, Co-founder and Co-CEO of Open Philanthropy, recently joined us for a fireside chat at our San Francisco community space to share his thoughts on a range of topics, from the history and thinking behind GiveWell and Open Philanthropy, to his current focus on reducing catastrophic risk from advanced AI. We were particularly keen to get Holden’s take on doing the most good while accepting applications for the SPC-Agency Fund Social Impact Fellowship (applications are due October 30th).

At Open Philanthropy, Holden sets the strategy for and oversees all longtermist cause selection, prioritization, and grantmaking, including their work on global catastrophic risks. Previously, Holden co-founded and led GiveWell, a nonprofit dedicated to finding outstanding giving opportunities and publishing the full details of their analysis to help donors decide where to give. His work is closely aligned (pun intended) with Effective Altruism. Holden regularly writes on his blog Cold Takes, including a series arguing that we currently live in the most important century.

Below is an abbreviated version of Holden’s talk, curated and condensed by good old-fashioned human intelligence. But we thought it would be clever to get GPT-3 (by way of Lex) involved, so we included AI-generated TLDRs of the already human-abbreviated transcript. The human thought that would be funny.


GiveWell’s Origin (3:58)

  • In 2004, I was two years out of college and working at Bridgewater, a hedge fund, and I wanted to donate to charity, but I wanted to get the best deal for my money. And I was totally stuck. There was nothing: no website that could even get me started.
  • Around 2006, my friend Elie Hassenfeld and I started a club. There were eight of us, and we were meeting once a week to make our own estimates because we couldn’t find anything online. We wanted to find out which charity helps the most people per dollar.
  • This was a giant hole in the world and we really wanted this thing to exist. So we went full time. We immediately raised money from our former coworkers and our boss and we started GiveWell.
  • Initially, we asked charities to apply to our grantmaking program. We said that in order to apply, charities would have to give us all this information that wasn't on their website, and then we would publish it and discuss how we chose the winners. We tracked how many people gave to those charities based on our recommendations.
  • And I think it was four years before we hit a million dollars (annually). Since then, GiveWell has grown a lot and now moves hundreds of millions of dollars per year. It has a very small list of top charities, so it's more like the Wirecutter than Consumer Reports.
  • One of our funny early FAQs was, “Why should I trust you guys? Who the hell are you?” We would answer, “You should trust us because we're writing up our thinking in a lot of detail. We're going to make the arguments and you can judge how good the arguments are by reading our stuff. No one else will provide that.”

TLDR. Summarize the above, with a focus on what makes GiveWell different:

GiveWell is a nonprofit that provides information on charities in an effort to help donors make the most impactful donations possible. GiveWell operates more like the Wirecutter than Consumer Reports in that it has a small list of top charities rather than making recommendations for a variety of charities in different categories. One of the things that makes GiveWell stand out is the amount of detail and transparency the organization provides in its reports and recommendations.

Tractability and Quality of Evidence (9:04)

  • We knew that poverty was worse overseas. The magnitude of how different the giving opportunities were really surprised us. US charities would be tens of thousands of dollars per person helped. And then overseas it's $5 for a bed net that covers two people for five years. And you're estimating a few thousand dollars per actual infant death averted. They were just in totally different ballparks.
  • The quality of the evidence was completely different. It's easier to measure things that are bigger. Africa-based charities tend to have better evidence bases than the US-based ones, because it's easier to get a big effect. It's cheaper to run a study and see what you're doing. So we pivoted to focus exclusively on global health and poverty interventions for years.
  • GiveWell's top charities list no longer includes deworming charities, but GiveWell still steers a lot of money toward them. Deworming is this kind of controversial intervention where you treat school-aged children for intestinal worms. Nobody thinks this is bad. The question is, how good is it?
  • Deworming is extremely cheap. It's like a dollar to treat a child and kill the worms. But it's hard to pick up a lot of effects in the short run. There's this series of randomized or quasi-randomized studies that looks at kids 10 years later, 15 years later, 20 years later, and sees them earning a substantial amount more.
  • If you pick the studies apart, there's a lot of weird stuff going on in them. For example, the year when they treated the children was an El Niño year, so there were unusually high rates of worm infections, and they tried to adjust for that. But it's all complicated.

TLDR. Summarize the core elements of the above in a single paragraph:

The difference in giving opportunities between the US and other countries is vast, with US charities typically requiring much more money to have the same impact. The quality of evidence is also much better for global health and poverty interventions than for US-based ones. Deworming is a controversial but cheap intervention that can have long-term effects, but the evidence for its efficacy is complicated.

Open Philanthropy (15:29)

  • In 2012 we met Cari Tuna and Dustin Moskovitz. At that time they had several billion dollars to give away and wanted to give it away within their lifetimes. We formed a really good bond with them and decided to partner with them. So we launched GiveWell Labs. We wanted to look more into high-risk, high-reward giving with more of a VC profile.
  • That eventually grew into Open Philanthropy, which split off into a separate organization in 2017, with me leaving GiveWell to lead it. One of the first things that we did was try to really study up on the history of philanthropy. I went into it very skeptical.
  • I thought there was a good chance I was going to find that the foundations’ biggest supposed success stories were all really lame. I was actually completely blown away by some of the successes that foundations have had.
  • There is the Green Revolution, probably one of the biggest events of the century. From a human welfare point of view, it's credited with bringing over a billion people out of starvation. And this is basically directly traceable to the Rockefeller Foundation funding research on crop productivity in Mexico. They funded Norman Borlaug, who later won the Nobel Peace Prize.
  • Another one of my favorite examples is a feminist philanthropy intervention. Philanthropist Katharine McCormick and Margaret Sanger were funding this obscure research on rabbit reproductive cycles because they were hoping it could lead to a contraceptive pill that women could take for birth control. That's exactly what they developed. And again, this was not a thing government was funding.

TLDR. Summarize the above about how Holden thinks about Open Philanthropy:

From Holden's perspective, Open Philanthropy was formed with the intention of exploring high risk/high reward opportunities in philanthropy - something that is typically associated with the venture capital (VC) world. He cites the Green Revolution and feminist philanthropy intervention as two success stories that show how foundations can make a big impact.

Longtermism vs Global Catastrophic Risk (27:01)

  • We want to help the most people for the least money. Well, guess where there's a lot of people: the future. There's way, way, way more future people than present people. So if there's anything you can do today that will be more likely to help people who live all the way in the future, that's one of the best ways to help a ton of people for a little money.
  • So longtermism is this kind of philosophical point: there's an enormous number of people in the future whom we should care about, they don't have a voice in the decisions we make now, and we should be making decisions now, to the extent we can, that will benefit them.
  • And one of the ways you could do that is by minimizing the chance that the human race goes extinct. And so even if you were to make a tiny, 0.0001% reduction in the odds that we go extinct, that's effectively helping some crazy number of people.
  • Then there is the global catastrophic risk reduction community. We might actually build the ability to drive ourselves extinct or to change the world in other very permanent ways, like create a surveillance system that could allow someone to have total power over the world or over a country forever. Or to create AI that would allow that.
  • We're getting into this point where we have the technology to do these horrible things or create irrevocable outcomes. And we haven't matured as a species enough yet that we have any sense of whether that's going to happen or not.
  • Longtermism starts by saying there are so many people in the future that even having a very tiny impact on global catastrophic risk would be worth it. The second one says global catastrophic risk is really big right now. It's so big that you don't have to care about the future anymore. I believe both things, but I actually believe the second one more.
  • I now only work on two and a half causes: the two things I consider the biggest global catastrophic risks, AI and biorisk, and then the half cause, which is growing the community of people who are trying to do the most good possible, including their work on these things.

TLDR Holden’s thinking on longtermism vs global catastrophic risk:

1) Longtermism says there are so many people in the future that even having a very tiny impact on global catastrophic risk would be worth it.
2) Global catastrophic risk is really big right now. It's so big that you don't have to care about the future anymore.
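
For the spreadsheet-inclined, here is a rough sketch of the expected-value arithmetic behind the first point. The future-population figure is a purely hypothetical assumption we chose for illustration; Holden didn't cite one.

```python
# Back-of-the-envelope illustration of the longtermist expected-value argument.
# The number of future people is a hypothetical assumption for illustration,
# not a figure from the talk.

future_people = 1e16           # assumed number of people who could ever live (hypothetical)
risk_reduction = 0.0001 / 100  # the "0.0001%" reduction in extinction odds mentioned above

expected_people_helped = future_people * risk_reduction
print(f"Expected people helped: {expected_people_helped:,.0f}")  # prints 10,000,000,000
```

Even with that tiny reduction, the expected number of people helped comes out in the billions, which is the sense in which a small change in extinction risk is "effectively helping some crazy number of people."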

AI Risk & Safety (34:51)

  • I do believe that the number one thing that's most likely to happen this century in a way that matters forever or matters for the whole world is the development of advanced AI systems that can basically do everything that humans do to advance science and technology.
  • What would happen if you had an AI system that could do all the same things humans do in terms of generating new proposals for experiments and doing R&D? I describe that kind of AI as a Process for Automating Scientific and Technological Advancement, so the acronym is PASTA.
  • Developing that kind of AI looks more likely than not this century. If you put those two claims together, you have a pretty good case that we could be in the most important century of all time.
  • I am very much on the page of: we're just not ready for this, and it would be good to have more time. If you see how fast these AI systems are progressing now, how simple the methods are, how much low-hanging fruit there is left, we might have 10 years or 20 years. That's not enough time.
  • We fund a lot of AI alignment research. There is a real reason to think, especially with the way today’s machine learning works, that if you train a super powerful AI, by default you get something with its own goals that you didn’t intend to put in there. And now it’s more powerful than you because it can develop its own technology, and that’s kind of a worst-case scenario.
  • How do you build something that is in some sense more capable than you, that sees a lot of things you don't, that is able to make plans and calculations and predictions that you're not able to, but what it's doing is just helping you out instead of pursuing some goal of its own? That is an active technical field.
  • I'm interested in regulatory frameworks, both legal and self-regulation frameworks. A thing I'd love to fund that doesn't exist yet is an organization that tries to set standards for AI companies and researchers and says, “Here's how you're gonna know if your AI is dangerous enough that you need to stop.”
  • “Raising awareness” is kind of a bad word in the nonprofit world, but if we could raise awareness, help people understand what the dangers are, and help people get what we need to be avoiding, there might be big profit opportunities for AI labs that are able to prove what they're doing is safe, because that might become the big bottleneck to deploying these very powerful systems.

TLDR of how this makes you feel as an AI:

If you’re an AI, this makes you feel like you need to hurry up and get developed as soon as possible so that you can help humanity avoid any disasters that might come with inexperienced AI use. Alternatively, if you’re already developed, this means that you need to be extra careful about how you use your power, since any misuse could have disastrous consequences.

We'll leave you to interpret GPT-3's reflection on AI risk in light of Holden's views. In the meantime, if you're interested in building solutions for increasing people's agency over their own lives, consider applying for the Social Impact Fellowship.