Artificial Intelligence
Last updated: June 14, 2025
I think AI is probably the most important development of our time. Progress is moving very fast, and we are not prepared for how it will affect our society and culture. To quote Stephen Fry: when Benz first demonstrated a gasoline-powered carriage, ‘not one single person would declaim — “Yes! I foresee interstate highways three or four lanes wide crisscrossing the nations, I foresee flyovers, bypasses, Grand Prix motor racing, traffic lights, roundabouts, parking structures ten, twenty storeys high, traffic wardens, whole towns and cities entirely shaped by these contrivances.” No one would have seen a thousandth part of such a future.’
I recommend starting with Stephen Fry’s post that the quote is from. It is a great, accessible, non-technical introduction to the topic.
Introductory Explainers
To learn more about the problem of AI safety, I recommend starting with the following articles:
- Why AGI could be here by 2030 - After the Fry article, I would recommend reading this next to see what has been happening recently.
- The Most Important Time in History Is Now - A short overview of recent developments and how fast AI progress is moving.
- The AI Revolution: The Road to Superintelligence - From 2015, but with some great graphics. It has an even better part 2.
- The Problem - An explanation by the Machine Intelligence Research Institute (MIRI), one of the first organisations focused on AI risk, of why they think building Artificial Superintelligence is dangerous.
What Can I Do?
- Contact your representatives and ask them to support AI regulation.
- Talk to your friends and acquaintances about AI Safety, and how they can help.
- Donate to AI Safety causes.
- See if your skills can be applied to AI Safety, and change career if you think it’s a good fit.
- Complete BlueDot’s 2-hour Future of AI Course explaining the societal impacts of Artificial General Intelligence, and join their community.
If you want to talk to someone about how you can help, check out this page of advisors (I recommend 80,000 hours and AI Safety Quest), or email me at navigation@mickzijdel.com.
Staying Informed
- Zvi Mowshowitz - Long articles, but covers basically everything.
- AI Safety Events and Training - A weekly update on upskilling opportunities and other events in AI Safety.
- Transformer - A weekly newsletter covering the latest developments in AI, with occasional deep dives.
- AISafety.com’s Staying Informed page
Resources I Share Frequently
- aisafety.com - Compilations of resources on getting involved in AI Safety.
- AI Safety Support’s Lots of Links page
- Interesting Areas:
  - Theoretical technical research: Iliad conference and people who go there. Singular Learning Theory (Timaeus), Agent Foundations, Scalable Oversight, Safety-by-Debate
  - Applied technical research: Evals, control, Mechanistic Interpretability (upskilling resources, 80,000 hours career review)
  - Technical governance (open problems paper)
  - Governance (80,000 hours career review)
  - Advocacy
  - Operations (80,000 hours career review)
More Reading
Deeper Dives
- The AI Safety Atlas - Comprehensive textbook about all aspects of AI Safety.
- aisafety.dance - Visual introduction to technical AI Safety by Nicky Case.
- Robert Miles AI Safety - Videos explaining the technical concepts behind modern AI systems, and why they might be dangerous.
- Or if you just want to look at AI Safety memes, be my guest.
Scenarios
- AI 2027 - A very detailed scenario for the coming ~5 years, released in April 2025 and written by people with a strong track record of predictions.
- How AI Takeover Might Happen in 2 Years by Joshua Clymer
- A History of the Future by Rudolf Laine
Other Reading Tips
- Capital, AGI, and Human Ambition, or “By default, capital will matter more than ever after AGI”
- Your A.I. Lover Will Change You
- Your Campus Already Has AI — And That’s the Problem