
Superintelligence: Triumph Or Threat?

I recently started reading Superintelligence, a new book by Oxford University philosopher Nick Bostrom, who is also director of the Future of Humanity Institute. (Now, that's a really cool job title.)

Bostrom is well-known for his argument that there is a real chance we live in a simulation, or, even more dramatically, that we ourselves are a computer simulation. Just as we play video games today with characters that increasingly resemble real people, it's possible to imagine a future in which computers are so sophisticated that simulations (the "games") are essentially indistinguishable from reality. That being the case, Bostrom asks, how do we know we are not in a simulation developed by aliens, or in one created by our own descendants?

This is the main idea in a few sci-fi movies, the most famous being The Matrix, with Keanu Reeves as Neo, the redeemer who frees humanity from its enslavement. The idea exploits the fact that our brains gather information about reality through our sensory organs; if we bypass them, feeding information straight to the relevant parts of the brain, we can trick it into thinking it lives in a world that is a fabrication. The virtual would be the real.

In Superintelligence, which I will review in more detail soon, Bostrom explores a different scenario, no less disturbing. If we are able to create superintelligent machines in the not-so-distant future, how can we make sure that they will not also be our doom?

Bostrom gives two examples right at the beginning of the book. He talks about our relationship with gorillas, how their survival depends on our goodwill. We know that there is a deep tension between the clandestine hunting and killing of these wonderful animals and the efforts to preserve them. Similarly, will we become the future gorillas, our fate depending on the goodwill of the new machines?

In another example, Bostrom tells the fable of the sparrows that, tired of having to build nests and hunt for food, decided to find an owl to take care of their needs. It could also protect them against predators, like the neighbor's cat. While most found the idea brilliant, a minority opposed it, raising the obvious objection that they had no clue how to domesticate an owl. How could they learn to do that without having one in hand? The fable ends with a group of sparrows going out into the world to search for an owl's egg: There is no real ending.

Likewise, our fable with artificial intelligence machines has no real ending. The crucial question is how we will decide to end it.


Marcelo Gleiser's latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning. You can keep up with Marcelo on Facebook and Twitter: @mgleiser.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Marcelo Gleiser is a contributor to the NPR blog 13.7: Cosmos & Culture. He is the Appleton Professor of Natural Philosophy and a professor of physics and astronomy at Dartmouth College.