When it comes to artificial intelligence, one of the most commonly debated issues in the technology community is safety — so much so that such concerns helped lead to the brief ouster of OpenAI co-founder Sam Altman.
Those concerns boil down to one truly unfathomable question: Will AI kill us all? Allow me to set your mind at ease: Artificial intelligence is no more dangerous than the many other existential risks facing humanity, from supervolcanoes to stray asteroids to nuclear war.
I am sorry if you don’t find that reassuring. But it is far more optimistic than what someone like AI researcher Eliezer Yudkowsky believes, namely that humanity has entered its final hour. In his view, AI will be smarter than us and will not share our goals, and soon enough we humans will go the way of the Neanderthals. Others have called for a six-month pause in AI development so that we can get a better grasp of what is going on.