How to think about careers: building aptitudes

Based on an excellent talk on career planning by Emma Abela from the Global Challenges Project

This article is most helpful if you are already familiar with Effective Altruism (EA). If you are not, you might still find some of the ideas helpful.

The default EA career planning approach

Most EAs are not approaching career planning in the right way.

Let’s say you’re an undergrad at Waterloo studying chemical engineering and specializing in thermodynamics. At the same time, you are really passionate about ending factory farming. You might approach career planning like blocks of LEGO, and try to fit these two things together: “How could I use thermodynamics to end factory farming?”

This is the default approach to EA career planning: take a look at all the careers that are somehow classified as “EA careers” — in other words, everything mentioned by 80,000 Hours.

And then you think about where you fit in. What are your interests, and how do they relate to these careers? How does your degree relate? You find the intersection of all those things, and that’s the career for you.

This is a bad approach to career planning. There are three problems with it.

Problem #1: Leads to thinking “so long as it counts, it’s good”

You think about whether your career sounds like an EA career, as opposed to thinking about whether you’re actually trying to improve the world as much as you can.

Basically when another EA asks you about your career, you want to say something that clearly sounds like an EA career.

You want to say that you work on AI safety, or drop the name of some big EA org you work at. But what matters is not whether what you do can be called global priorities research; what matters is that you’re actually trying to do the most good.

Kelsey Piper is a journalist at Vox who writes EA-related articles in the Future Perfect section.

Kelsey is someone who managed to avoid this trap. Even though journalism was not thought of as a main EA career, she saw that it was a very high-impact thing for her to do, so she went for it rather than going for a more recognized EA career.

And she’s had massive counterfactual impact because of this.

Problem #2: Overemphasizes your current interests, skills and knowledge

Your interests can change a lot more than you might think.

This means that your current interests are much less important than you think they are. If you aim for what you think is most important and really lean into it, you really will become interested in it.

So basically, whatever you’re interested in right now doesn’t really matter, and it shouldn’t be a major factor in your career decisions the way it is with the default approach to EA career planning.

Don’t say:

“What EA stuff are you interested in?”

Say:

“What do you think is most important to change about the world?”

So as well as your interests, the default approach also overemphasizes your current skills and knowledge.

You may think you can’t contribute to AI safety because you didn’t study computer science in undergrad.

Fun fact: both the president of [Anthropic](https://www.anthropic.com/), Daniela Amodei, and one of Anthropic’s co-founders, Jack Clark, studied English literature in undergrad.

Here’s another example: Jason Matheny.

Jason studied art history at U Chicago, but he didn’t look for a job relevant to art history when he graduated.

Instead, he thought that working on HIV prevention in India was important. So he did that and got a PhD on the economics of pharmaceutical development, but then … he didn’t stick to global health either.

He founded New Harvest, the first nonprofit dedicated to cultivated meat research. He was one of the first to popularize cultivated meat. I’m sure you can guess the pattern by now — he decided not to stick to that either.

Jason went into x-risk research, first doing biosecurity work at the Center for Health Security at Johns Hopkins University, and then becoming director of research at the Future of Humanity Institute at Oxford University.

And then he shifted again, to climbing the ranks of the US government to try to reduce x-risks. He went to IARPA and after six years became its director, then was the founding director of CSET, the Center for Security and Emerging Technology at Georgetown University.

And now he’s in the White House in various important roles, including being the Deputy Director of National Security in the Office of Science and Technology Policy.

When you think about your career, think about people like Jason.

You don’t need to be held back by your background.

If you think something is important and really high impact, you should go and try to work on that problem, whether or not you’ve worked on it before.

Problem #3: What we most need is people who can figure out what to do

EA is talent-constrained, not funding-constrained. The exact numbers quoted in this section are outdated because of the whole FTX debacle, but the spirit remains very much true.

We need people who can figure out what to do.

What kind of person is that? People with an entrepreneurial mindset - who can grapple with an ill-defined problem and actually contribute and understand what’s going on without a lot of guidance.

How does this relate to the default 80,000 Hours career planning approach? The default approach tells you what to do instead of getting you to start figuring out what to do.

Figuring stuff out is what we most need young people to practice getting good at.

Imagine two people who both check out the 80k website and think that AI safety research seems like a very important thing to do.

One person ends their investigation there, starts taking computer science classes, and applies to jobs on the 80k job board.

And the other person spends hundreds of hours digging into AI safety, trying to understand what’s going on and trying to understand 80k’s reasoning for recommending it.

The second person is much more likely to help prevent AI risk, because they’re getting into the headspace and the practice of figuring out for themselves: “What do I actually think we should do about AI risk?”

Building aptitudes and understanding

What is an approach that avoids the problems outlined above?

You have two missions:

  1. building your aptitudes to get really good at something useful
  2. building your understanding of how to improve the world

If you keep pursuing both of these missions, they’ll come together and allow you to continually find ways to improve the world.

How do you actually do this?

Step 1: Check out this general list of aptitudes by Holden Karnofsky.

Step 2: Try to get very good at one of the aptitudes

One counter-intuitive result of this approach is that you want to choose internships and side projects more based on whether they help you build the desired aptitude than whether they’re high impact.

This is somewhere a lot of students in EA are making a mistake.

If you want to get better at the “organization building, running, and boosting” aptitude, you want to look for an organization that is growing quickly, has good organizational capacity, and has people there that you’ll learn a lot from.

That’s much more important than whether they happen to be working on something high impact.

How do you get funding to do all this stuff? Open Philanthropy provides early-career funding, e.g. if you want to do another degree, self-study, or attend a bootcamp that you think will put you in a better position to improve the long-term future. The Long-Term Future Fund is another great option for funding.

Unlike the default career planning approach, this isn’t something that you just do once, and then you have a clear path to follow.

You keep doing it and let it build up over time.

If you’re going to do one thing now that you’ve finished reading this blog post, figure out how you’re going to set aside time to do this.

It could be every Sunday, or every day for an hour, or intensely for a couple of weeks over the summer.

Whatever you decide, we highly recommend you decide right now.


February 6, 2023