Will AI kill us? Or Save us?
Image: Alex Matthews/Qualcomm Institute
I’m convinced that artificial intelligence will eventually exceed human intelligence, and not just because I get a little dumber every time I check Twitter. It’s just that I think there’s nothing particularly special about the human brain that can’t be reproduced, or improved upon, with a computer. That we’re close to actually creating intelligent beings is amazing -- and very dangerous. The entire internet intelligentsia has been discussing this back and forth for a couple of years now. And today I have a collection of all the vocabulary that you need to chime in.
If you want to talk about the existential risk posed by AI, the first phrase you need to drop is “paperclip maximiser”. Yes, paperclips. The paperclip maximiser is a fictional AI thought up by the philosopher Nick Bostrom. It’s tasked with producing the largest possible number of paperclips. To reach that goal, it clears Earth of humans and turns the entire planet into a paperclip factory.
A similar example comes from Marvin Minsky, one of the founders of MIT’s famous AI lab. In his example, the AI is tasked with solving the Riemann Hypothesis, and humans are in the way.
I’ve never found these arguments particularly convincing. The paperclip maximiser needs to be intelligent enough to kill several billion humans, yet it never questions whether producing paperclips is a good use of its time. That doesn’t seem plausible to me.