The Intimate Relationship between Stephen Hawking and Artificial Intelligence (AI)


Stephen Hawking died a few days ago after suffering from amyotrophic lateral sclerosis (ALS) for 55 years. I have read every book he wrote or co-authored, and I regularly searched the Internet for new insights into him and his thoughts on the subjects that interested him. Recently I wrote a blog post inspired by model-dependent realism, a concept introduced to me in The Grand Design, a book Hawking co-authored.

Among the many subjects Hawking addressed in his writings, artificial intelligence (AI) stands out, not because the subject fascinates me (though it does), but because Hawking had an extraordinary relationship with AI: he considered it both his most treasured asset late in life and mankind's most formidable future threat.

In 1985 Hawking contracted pneumonia and was placed on a mechanical ventilator. Doctors asked his wife at the time, Jane, whether she wanted life support withdrawn. She stubbornly declined and arranged to have him transported to a hospital in Cambridge, where doctors controlled the pneumonia and performed a tracheotomy that irreversibly sacrificed his ability to speak.

Hawking began communicating through cumbersome aids, such as raising an eyebrow when the letter he wanted was shown to him from a deck of spelling cards.

Digital technology came to Hawking's aid in the form of an Apple II computer running a program called Equalizer, connected to a primitive speech synthesizer. Hawking could spell words at a rate of about 15 words per minute by pressing a hand-held button as letters appeared on the computer's screen. Over time, ALS robbed him of the ability to press the button. Even with modifications that let him select letters and other functions with cheek movements, and with a primitive word-prediction program, his communication speed degraded to a couple of words per minute. Nevertheless, for the first time, Hawking had experienced the personal benefit of a technology that would mature into rudimentary forms of AI.
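
To make the mechanics concrete, here is a minimal sketch of how a one-switch linear scanning interface works. The code is my own illustration in Python, with invented names; it is not the actual Equalizer software, whose internals are not public in this form.

    # Illustrative sketch of one-switch linear scanning (not the actual
    # Equalizer code). The cursor steps through the alphabet, and a single
    # switch press selects whichever letter is currently highlighted.

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def select_letter(switch_pressed):
        """Step through the alphabet until the switch fires.

        switch_pressed(letter) stands in for the hand-held button: it
        returns True if the user presses while that letter is highlighted.
        """
        while True:  # keep cycling until the user reacts
            for letter in ALPHABET:
                if switch_pressed(letter):
                    return letter

    # A user spelling "HI" presses the switch when H, then I, is highlighted.
    message = "".join(
        select_letter(lambda letter, want=ch: letter == want) for ch in "HI"
    )
    print(message)  # -> HI

The design point is that everything funnels through a single binary input; speed comes entirely from how intelligently the system orders and predicts the choices, which is where AI later entered the picture.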

A number of subsequent experiments failed (caps that read brain waves, technology to track eye movements, and so on). By 2015, elements of AI were beginning to greatly improve Hawking's ability to communicate. Programs were becoming context-aware, using Hawking's vocabulary and writing style to predict whole words or phrases from a database containing the text of his lectures, books, and other writings. Hawking operated the software through spectacles fitted with an infrared switch that detected small facial movements, which moved the cursor and selected functions.
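
The word-and-phrase prediction described above can be illustrated with a toy bigram model. This Python sketch shows only the general technique; the production predictor trained on Hawking's documents was far more sophisticated.

    # Toy corpus-based next-word prediction (illustration only). Given the
    # word just typed, suggest the words that most often followed it in
    # the user's own writings.

    from collections import Counter, defaultdict

    def build_bigram_model(corpus_text):
        """Count, for each word, the words that follow it in the corpus."""
        words = corpus_text.lower().split()
        following = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
        return following

    def suggest(model, prev_word, k=3):
        """Return up to k of the most likely next words after prev_word."""
        return [word for word, _ in model[prev_word.lower()].most_common(k)]

    # Stand-in corpus; the real system drew on Hawking's lectures and books.
    corpus = "the universe began the universe expanded the black hole evaporated"
    model = build_bigram_model(corpus)
    print(suggest(model, "the"))  # -> ['universe', 'black']

Each accepted suggestion replaces many letter-by-letter selections, which is why corpus-tuned prediction mattered so much once Hawking's selection rate had fallen to a couple of words per minute.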

While AI facilitated Hawking's communication, two years before his death he warned that, when fully developed, it could destroy the human species. He was not alone in this belief: Bill Gates, Steve Wozniak, and Elon Musk have expressed less apocalyptic versions of the same concern. Hawking's fear rested on the fact that no theoretical limitation prevents a computer from emulating, or even exceeding, human intelligence. This view leaves no room for a higher intelligence (a god, for example) having imbued humans with supernatural characteristics that a machine cannot replicate, and it is consistent with Hawking's belief that a creator is not necessary to explain how the universe began; all we know of the universe is, in his mind, explainable by the laws of nature without outside intervention. It was therefore reasonable for him to expect that humans can eventually create machines possessing all the qualities of humans, including empathy, emotional intelligence, and malice. Self-replicating machines could then be as destructive as humans armed with nuclear weapons, or worse. Whether such machines would evolve to protect or destroy humans is unknowable, but we must be aware of the possibilities as the technology progresses.

Professor Maggie Boden, with five decades of deep involvement in artificial intelligence research, is less concerned with death by hostile machines than with more immediate impacts on our lives. She worries about the effects of robots and other AI machines on the course of human behavior. For example, there will be strong financial incentives to cut costs by substituting machines for humans. What happens to the elderly who depend on care from machines in place of human contact, or when self-driving cars fail to respect pedestrians or fail to recognize a person in need of emergency attention? (As an aside, as I write this, an Uber self-driving car has struck and killed a woman crossing a street in Arizona.) It doesn't take a huge stretch of imagination to think that smartphones and other devices filled with AI will dramatically change the world's cultures and politics in ways that threaten human existence sooner than malicious robots will.

"There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

"...speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems."

The accompanying research agenda flags near-term priorities, among them:

Security: How to prevent intentional manipulation by unauthorized parties.

Control: How to enable meaningful human control over an AI system after it begins to operate. ("OK, I built the system wrong; can I fix it?")

(AI) "... could be applied to the detection of intrusions (Lane 2000), analyzing malware (Rieck et al. 2011), or detecting potential exploits in other programs through code analysis (Brun and Ernst 2004). It is not implausible that cyberattack between states and private actors will be a risk factor for harm from near-future."

Stanford's One Hundred Year Study on Artificial Intelligence includes loss of control of AI systems as an area of study, specifically highlighting concerns over the possibility that "...we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems would threaten humanity."

It seems self-evident that the growing capabilities of AI are leading to an increased potential for impact on human society. It is the duty of AI researchers to ensure that the future impact is beneficial.
