Elon Musk says Artificial Intelligence is a “fundamental existential risk”; Mark Zuckerberg called his comments irresponsible. Who is correct?
Hawking lost his ability to speak in 1985 following a life-threatening bout with pneumonia. Doctors performed a tracheotomy that saved his life but permanently took his voice.
Digital technology came to Hawking's aid in the form of an Apple II computer running a program called Equalizer, connected to a primitive speech synthesizer. Hawking could spell words at about 15 words per minute by pressing a hand-held button as letters appeared on the screen.
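The mechanics of that interface can be sketched in a few lines. The following is a toy model (not the actual Equalizer software, whose details are not given here) of single-switch "scanning" input: the system cycles through the alphabet and the user presses one button when the wanted character is highlighted.

```python
# Hypothetical sketch of single-switch scanning input. The cursor cycles
# through the alphabet; one button press selects the highlighted character.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def presses_needed(message: str) -> int:
    """Count scan steps needed to spell a message, assuming the cursor
    restarts at 'A' after each selection."""
    steps = 0
    for ch in message.upper():
        # The cursor advances one letter per tick until it reaches ch.
        steps += ALPHABET.index(ch) + 1
    return steps

print(presses_needed("HI"))  # spelling even short words takes many scan steps
```

The count makes the cost of the interface concrete: every character requires waiting through, on average, half the alphabet, which is why Hawking's typing speed was so low.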
Over time, ALS deprived Hawking of his ability to press the hand-held button. A primitive AI application came to the rescue in the form of a word prediction program. But even with modifications allowing Hawking to select letters and other functions with cheek movements, his communication speed degraded to a couple of words per minute.
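The word prediction mentioned above can be illustrated with a minimal bigram model: given the previous word, suggest the words that most often followed it in past text. This is an assumption-laden sketch, not the model Hawking's software actually used.

```python
from collections import Counter, defaultdict

def build_bigram_model(text: str) -> dict:
    """Count, for each word, which words followed it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model: dict, prev_word: str, k: int = 3) -> list:
    """Return up to k most frequent next words after prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

model = build_bigram_model("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # "cat" appeared twice after "the", "mat" once
```

Even this crude frequency table shows the payoff: selecting a whole predicted word costs one action instead of one action per letter.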
A number of experiments designed by Intel subsequently failed (caps that read brain waves, eye-tracking technology, and so on). By 2015, rapidly advancing elements of AI had greatly improved Hawking's ability to communicate.
Over time, Hawking became more concerned with the balance between the benefits of AI and its potential for harm. He was not alone. An open letter signed by Hawking, Steve Wozniak, Elon Musk, and 8,000 others, including the most prominent AI scientists in the world, states: "Stanford’s One-Hundred Year Study of Artificial Intelligence includes loss of control of AI systems as an area of study, specifically highlighting concerns over the possibility that … we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems would threaten humanity."
In a talk at the Web Summit technology conference in Lisbon, Portugal, Hawking stated that AI could be the "worst event in the history of our civilization".
It doesn't take a huge stretch of imagination to think that smartphones and other AI-filled devices will dramatically change the world's cultures and politics, threatening human existence sooner than malicious robots will.
If humanity is not threatened by AI, its institutions are. The open letter referenced above contained a prophetic warning a decade before our present political situation: "...cyberattack between states and private actors will be a risk factor for harm from near-future [AI]." Witness the current use of sophisticated algorithms that can influence individuals based on the information obtainable from our personal online activity and other publicly available data.
I leave it to you to judge AI for yourself. My own view is that AI researchers are prudent to heed the closing message of a document putting forth long-term research priorities for AI:
"It seems self-evident that the growing capabilities of AI are leading to an increased potential for impact on human society. It is the duty of AI researchers to ensure that the future impact is beneficial."