Stephen Hawking and Artificial Intelligence

Elon Musk says Artificial Intelligence
is a “fundamental existential risk”;
Mark Zuckerberg called his comments
irresponsible. Who is correct?


Stephen Hawking died a few days ago having survived an astounding 55 years with amyotrophic lateral sclerosis (most ALS patients survive two to five years). Hawking had a close relationship with artificial intelligence (AI), both depending on it to communicate after losing his ability to speak and fearing its potential to destroy mankind.

Hawking lost his ability to speak in 1985 following a life-threatening bout with pneumonia. Doctors recommended he be removed from life support, but his then wife, Jane, insisted he be moved to a different hospital, where his infection was cured and a tracheotomy allowed him to breathe without a ventilator but robbed him of speech. He communicated by raising an eyebrow when his desired letter was displayed from a deck of spelling cards. Communicating was slow, producing a word every minute or so.

Digital technology came to Hawking's aid in the form of an Apple II computer running a program branded Equalizer, connected to a primitive speech synthesizer. Hawking could spell words at a rate of 15 words per minute by pressing a hand-held button as letters appeared on the computer's screen.
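Single-switch interfaces like the Equalizer work by "scanning": letters are highlighted one at a time, and the user presses the switch when the desired letter is highlighted. A minimal sketch of the idea follows; the letter ordering and scan rate are illustrative assumptions, not details of Hawking's actual system.

```python
# Toy model of single-switch linear scanning. Letters are highlighted
# in a fixed order; the user presses a switch at the desired letter.
# Ordering letters by English frequency keeps scan times tolerable.
SCAN_ORDER = "ETAONRISHDLFCMUGYPWBVKXJQZ "  # common letters first; space last

def scan_steps(message: str) -> int:
    """Total highlight steps needed to spell `message` (scan restarts per letter)."""
    return sum(SCAN_ORDER.index(ch) + 1 for ch in message.upper())

def words_per_minute(message: str, seconds_per_step: float = 0.5) -> float:
    """Effective typing speed for `message` at a given scan rate."""
    words = len(message.split())
    minutes = scan_steps(message) * seconds_per_step / 60
    return words / minutes
```

Even this crude sketch shows why such interfaces are slow, and why putting frequent letters first matters: with a uniform A-to-Z ordering, scan times for English text roughly double.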

Over time, ALS deprived Hawking of the ability to press the hand-held button. A primitive AI application came to the rescue in the form of a word prediction program. But even with modifications allowing Hawking to select letters and other functions with cheek movements, his communication speed degraded to a couple of words per minute.

A number of experiments designed by Intel subsequently failed (caps that read brain waves, technology to track eye movements, etc.). By 2015, elements of AI were advancing rapidly and greatly improved Hawking's ability to communicate.

Intel provided Hawking with a context-aware program called the Assistive Context-Aware Toolkit (ACAT), which used Hawking's vocabulary and writing style to predict whole words or phrases. The program drew on a database containing the text of his lectures, books, and other writings. The technology worked much like a smartphone suggesting words and phrases before you finish typing a text message. In place of typing, Hawking operated the software using spectacles with an infrared switch that detected small facial movements to control the cursor and select functions.
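The kind of personalized word prediction described above can be sketched as a simple frequency model built from a user's own writing. This toy bigram version is only an illustration of the principle; ACAT's actual model is far more sophisticated, and the sample corpus here is invented, not Hawking's.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus: str):
    """Map each word to a Counter of the words observed to follow it."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word: str, k: int = 3):
    """Return up to k most likely next words after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Train on a tiny sample of the user's own writing
# (illustrative text, not an actual corpus).
corpus = ("the universe began the universe expanded "
          "the universe is governed by the laws of nature")
model = build_bigram_model(corpus)
```

After typing "the", the model would offer "universe" first, because that continuation dominates this user's writing; a model trained on someone else's prose would rank candidates quite differently, which is why building the database from Hawking's own lectures and books mattered.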

Over time, Hawking became more concerned with the balance between the benefits of AI and its potential for harm. He was not alone. An open letter signed by Hawking, Steve Wozniak, Elon Musk, and some 8,000 others, including the most prominent AI scientists in the world, states: "Stanford’s One-Hundred Year Study of Artificial Intelligence includes loss of control of AI systems as an area of study, specifically highlighting concerns over the possibility that … we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems would threaten humanity."

In a talk at the Web Summit technology conference in Lisbon, Portugal, Hawking stated that AI could be the "worst event in the history of our civilization".


Not all concerns about AI are as apocalyptic. Professor Maggie Boden, with five decades of deep involvement in artificial intelligence research, is less concerned with death by hostile machines than with more immediate impacts on our lives. She worries more about the effects of robots and other AI machines on the course of human behavior. For example, there will be strong financial incentives to cut costs by substituting machines for humans. What happens to the elderly person dependent on care from machines in place of human contact, or to self-driving cars that fail to respect pedestrians or to recognize a person in need of emergency attention? [As I write this, the first fatal accident caused by an autonomous car was reported; a self-driving Uber car struck and killed a woman crossing a street in Arizona.]

It doesn't take a huge stretch of imagination to think that smart phones and other devices filled with AI will dramatically change the world's cultures and politics in ways that threaten human existence sooner than will malicious robots.

If humanity is not threatened by AI, its institutions are. The open letter referenced above contained a warning that now reads as prophetic: "...cyberattack between states and private actors will be a risk factor for harm from near-future." Witness the current use of sophisticated algorithms that can influence individuals based on information obtainable from our personal online activity and other publicly available data.


I leave it to you to judge AI for yourself. My own view is that AI researchers are prudent to heed the closing message of a document putting forth long-term research priorities for AI:

"It seems self-evident that the growing capabilities of AI are leading to an increased potential for impact on human society. It is the duty of AI researchers to ensure that the future impact is beneficial."







Stephen Hawking died a few days ago after suffering from amyotrophic lateral sclerosis (ALS) for 55 years. I read every book he wrote or coauthored and regularly searched the Internet for new insights into him and his thoughts about the subjects that interested him. Recently I wrote a blog post inspired by model-dependent realism, a concept introduced to me in The Grand Design, a book Hawking co-authored.

Among the many subjects Hawking addressed in his writings, artificial intelligence (AI) stands out, not because the subject itself fascinated me (though it does), but because Hawking had an extraordinary relationship with AI; he considered AI to be both his most treasured asset late in his life, and mankind's most formidable future threat.

In 1985 Hawking contracted pneumonia and was placed on a mechanical ventilator. Doctors asked his wife at the time, Jane, if she wanted life support withdrawn. She stubbornly declined and arranged to have Hawking transported to a hospital in Cambridge where doctors controlled the pneumonia and performed a tracheotomy that irreversibly sacrificed Hawking's ability to speak.

Hawking began communicating using cumbersome aids such as lifting an eyebrow when the letter of his choosing was shown to him from a deck of spelling cards.

Digital technology came to Hawking's aid in the form of an Apple II computer running a program branded Equalizer, connected to a primitive speech synthesizer. Hawking could spell words at a rate of 15 words per minute by pressing a hand-held button as letters appeared on the computer's screen. Over time, ALS robbed Hawking of his ability to press the hand-held button. Even with modifications allowing Hawking to select letters and other functions with cheek movements and a primitive word prediction program, his communication speed degraded to a couple of words per minute. Nevertheless, for the first time, Hawking had experienced the personal benefit of a technology that would mature into rudimentary forms of AI.


While AI facilitated Hawking's communication, two years before his death he warned that it could, when fully developed, destroy the human species. He was not alone in this belief: Bill Gates, Steve Wozniak, and Elon Musk have expressed less apocalyptic versions of the same concern. Hawking's fear was based on the fact that there is no theoretical limitation preventing a computer from emulating, or even exceeding, human intelligence. Although this view leaves no room for a higher intelligence (e.g., a god) having imbued humans with supernatural characteristics that cannot be replicated by a machine, it is consistent with Hawking's belief that a creator is not necessary to explain how the universe began; all we know of the universe is, in his mind, explainable by the laws of nature without outside intervention. It was therefore reasonable for him to expect that humans can eventually create machines with all the qualities of humans, including empathy, emotional intelligence, and malice. Self-replicating machines could then be as destructive as humans armed with nuclear weapons, or worse. Whether such machines would evolve to protect or destroy humans is unknowable, but we must be aware of the possibilities as technology progresses.




