Why subscribe?

About me

I’m a senior at the University of Pennsylvania studying Logic, Information, and Computation. I primarily care about solving the Alignment Problem: making sure that the creation of powerful artificial intelligence systems goes well. To learn more, read this post by Anthropic, a leading AI lab (alongside OpenAI) founded specifically to make AI development safe and beneficial.

My specific interest is in avoiding the absolute worst-case catastrophes. We call these worst-case catastrophes suffering risks, or s-risks for short.

Suffering risks, or s-risks, are “risks of events that bring about suffering in cosmically significant amounts” (Althaus and Gloor 2016).

You can read more about them here.

Much of my personal motivation for caring about this problem in particular is my support of suffering-focused ethics. You can read a bit about suffering-focused ethics here. I’ve also written a bit about it here.

How do I hope to make headway on this problem?

  • Technical research

  • Philosophy

    • Philosophy of Mind

    • Normative and Metaethics

    • Phenomenology

  • Outreach

    • Convincing technical researchers that this is a problem worth thinking and caring about

    • Potentially convincing the general public that this is a problem worth thinking and caring about

Stay up-to-date

You won’t have to worry about missing anything. Every new edition of the newsletter goes directly to your inbox.

Subscribe to Brandon’s Newsletter

Whatever I’m currently thinking about. It’s almost always somehow related to suffering-focused ethics, decision making, and AI alignment. That sometimes means discussing procrastination, productivity, and intentionality.

People

Pick up your pencil, and just start writing — David Giaramita