dmeh.net

The Last Invention

A history of the singularity, piece by piece, as explored by the thinkers who got us to where we are.

1863

"Darwin Among the Machines"

Samuel Butler

A letter to a New Zealand newspaper, just four years after Origin of Species, in which Butler applies Darwinian logic to machines: they're evolving, and faster than we are.

Read at Project Gutenberg Full text - Samuel Butler
1942–1950

I, Robot & the Three Laws

Isaac Asimov

Asimov invents the word "robotics" and gives us the Three Laws of Robotics. The stories then pressure-test the laws and show all the insidious ways in which they can be subverted.

Watch the Bill Moyers interview (1988) Video - Isaac Asimov
1950

"Computing Machinery and Intelligence"

Alan Turing

Can machines think? Turing proposes the imitation game, laying the foundation for what we now call "the Turing Test." Surprisingly readable tbh.

Read the paper (PDF) Paper - Alan Turing
1958

The von Neumann Singularity

Stanisław Ulam, recalling John von Neumann

From von Neumann's obituary: the two had discussed "the ever accelerating progress of technology... which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." Origin of the term, I think? (Correct me if I'm wrong)

Read in Bulletin of the AMS (PDF) Paper - Stanisław Ulam
1962–1964

Profiles of the Future / BBC Horizon

Arthur C. Clarke

Clarke predicts machines will "start to think, and eventually completely out-think their makers."

Arthur C. Clarke - BBC Horizon (1964). Watch video

1965

"Speculations Concerning the First Ultraintelligent Machine"

I.J. Good

Good worked with Turing at Bletchley Park. This paper gives the intelligence explosion its first rigorous formulation: if a machine can improve its own design, you get recursive self-improvement. That's arguably the next threshold we're approaching, so it's especially worth reading right now. RSI is all the hype.

Read the paper (PDF) Paper - I.J. Good
1988

Mind Children

Hans Moravec

Moravec argues robots will reach human-level intelligence by the 2040s. Good place to start, if you're not going to start with Kurzweil.

1993

"The Coming Technological Singularity"

Vernor Vinge

The paper that named it. Presented at a NASA symposium. Vinge's core argument: we can't model what comes after superintelligence, any more than a goldfish can model economics. (Vinge is also one of my favorite sci-fi writers, his stuff is great.)

Vernor Vinge - Groupminds, Singularity University (2012). Watch video

Read at SDSU Source - Vernor Vinge
1998

Robot: Mere Machine to Transcendent Mind

Hans Moravec

Moravec doubles down with specific forecasts. Haven't read this one yet.

1999

The Age of Spiritual Machines

Ray Kurzweil

Kurzweil draws exponential curves through the history of computation and extends them forward. Machines match human intelligence by 2029 (seemed ridiculous when I first read it, and now it seems insanely on point?). The law of accelerating returns. To be clear, though: while his timelines look super prescient, it doesn't really feel like we've taken the Kurzweil path to get here.

2000

The Singularity Institute for Artificial Intelligence

Eliezer Yudkowsky

Our first encounter with EY. He founds what will eventually become MIRI. The original premise centered on building superintelligence; it quickly shifted to figuring out how to control it.

intelligence.org Source - Eliezer Yudkowsky
2005

The Singularity Is Near

Ray Kurzweil

The one that brought it mainstream. By 2045, human and machine intelligence merge. Exhaustively researched, relentlessly optimistic. You'll either find it prophetic or maddening, possibly both. This is where I first met the term, after grabbing this book from my parents' bookshelf as a teenager.

2010

"The Singularity: A Philosophical Analysis"

David Chalmers

The "hard problem of consciousness" guy takes the singularity seriously and applies analytic philosophy to it. Useful because he's rigorous about what actually follows from the premises vs. what's hand-waving.

Read the paper (PDF) Paper - David Chalmers
2012

Centre for the Study of Existential Risk (CSER)

Huw Price, Martin Rees, Jaan Tallinn

Cambridge establishes a research center for existential risk. The Astronomer Royal, a philosopher, and a Skype co-founder. The academy starts taking this seriously.

cser.ac.uk Source - Huw Price, Martin Rees, Jaan Tallinn
2014

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom

Bostrom maps the paths to superintelligence and the ways it could go wrong. The paperclip maximizer comes from here. This might be the single most important book on the list if you're only going to read one.

Nick Bostrom - What happens when our computers get smarter than we are? (TED 2015). Watch video

2015

Open Letter on AI Safety

Future of Life Institute - Stuart Russell, Stephen Hawking, et al.

Thousands of researchers sign an open letter calling for AI safety research. Safety goes from fringe to recognized research priority.

futureoflife.org Source - Future of Life Institute
2016

"Concrete Problems in AI Safety"

Dario Amodei, Chris Olah, et al. (Google Brain)

Translates abstract alignment concerns into concrete research problems: reward hacking, scalable oversight, distributional shift. The paper that made safety legible to working ML researchers.

Read on arXiv Paper - Dario Amodei, Chris Olah, et al.
2017

Life 3.0

Max Tegmark

Accessible survey of the superintelligence landscape. Tegmark maps outcomes from utopia to extinction. A little on the pop-sci end but worth reading.

2020

GPT-3

OpenAI

175 billion parameters. Writes code, composes essays, reasons (sort of). The discourse shifts from "if and when" to "how fast."

Read on arXiv Paper - OpenAI
2022

"AGI Ruin: A List of Lethalities"

Eliezer Yudkowsky

Yudkowsky's most comprehensive case for why alignment is extremely difficult and the default outcome is bad. Probably the best piece to read if you want to really know his argument. Honestly, it's hard to argue with him either way if you haven't read this.

Read on LessWrong Source - Eliezer Yudkowsky
2022

Reality+

David Chalmers

Chalmers on virtual worlds, simulation, and what happens to "reality" when intelligence can be manufactured. Broader than the singularity specifically but relevant.

2025

The Scaling Era: An Oral History of AI, 2019–2025

Dwarkesh Patel

A collection of interviews and commentary compiled from the Dwarkesh podcast. Great read.

Stripe Press Book - Dwarkesh Patel