The notion that humanity is on the verge of a “singularity”—the point at which ordinary humans are overtaken by artificially intelligent machines, cognitively enhanced biological intelligence, or both—has migrated from science fiction into the domain of serious discussion. Some singularity theorists argue that if the field of artificial intelligence (AI) continues to advance at its current breathtaking rate, the singularity could occur by the middle of this century. Murray Shanahan introduces the idea of the singularity and examines the ramifications of such a potentially seismic event.
Rather than making forecasts, Shanahan sets out to explore a range of possibilities. Whether we believe the singularity is near or far, likely or unlikely, apocalyptic or utopian, the very idea raises crucial philosophical and practical questions that force us to reflect carefully on what we want as a species.
Shanahan describes technological advances in AI, both biologically inspired and engineered from scratch. He argues that once human-level AI has been achieved—a theoretically possible but difficult feat—the transition to superintelligent AI could be very rapid. Shanahan considers what the existence of superintelligent machines would mean for concepts such as personhood, responsibility, rights, and identity. Some superintelligent AI agents might be created to benefit humankind; others might go rogue. (Is Siri or HAL the model?) The singularity presents both an existential threat to humanity and an existential opportunity to transcend our limitations. Shanahan makes it abundantly clear that we must imagine both possibilities if we want to bring about the better outcome.