
Buddhistdoor View: The Dharmic Conundrum of AI

By Buddhistdoor
Buddhistdoor Global | 2016-11-11
WALL-E. From disney.com

As technology continues to advance at a breakneck pace, scientists now view what has haunted our imagination in science fiction novels and films—sentient robots and artificial intelligence (or AI)—as a very real possibility. Some of the most seminal works of fiction that have shaped popular culture have dealt with the subject of AI. The Terminator film franchise famously posits a hostile, self-aware computer network, Skynet, bent on destroying humankind. More recently, the film Her (2013) centers on a man who forms a romantic relationship with his smartphone’s virtual assistant. Disney’s WALL-E (2008), a tender commentary on the perils of the wasteful consumerism of modern society, and Warner Bros.’ endearing The Iron Giant (1999), offer diverse visions of AI: from being humankind’s existential enemy to being our final hope; from coldly logical and heartless characters to sensitive, caring protagonists.

Many philosophical and technological advances are still needed for such artificial superintelligence to be realized. For example, critical thinking and creativity (quintessentially human traits) need to be “built into” AI. In other words, AI cannot simply be pre-programmed to act or respond to stimuli in certain ways; it must have the capacity to learn. One further step was taken in this direction when the company DeepMind published in the journal Nature its latest advance toward AI, a differentiable neural computer (DNC), which it claimed was capable of generalized learning.
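For readers curious what “differentiable” buys here, the following is a minimal sketch in Python/NumPy of the DNC’s central trick, content-based addressing of an external memory. The names, sizes, and details are illustrative assumptions, not DeepMind’s published implementation.

```python
import numpy as np

# Toy sketch of the core idea behind a differentiable neural computer:
# a network coupled to an external memory matrix that it reads by
# content similarity. Names and sizes are illustrative, not DeepMind's code.

rng = np.random.default_rng(0)
N, W = 8, 4                       # memory slots x slot width (arbitrary)
memory = rng.normal(size=(N, W))  # the external memory matrix

def content_read(key, beta=5.0):
    """Soft, content-based read: a weighted blend of all memory slots.

    Because the read is a smooth function of the key, gradients flow
    through it, letting the whole system be trained end to end.
    """
    key_n = key / (np.linalg.norm(key) + 1e-8)
    mem_n = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    scores = beta * (mem_n @ key_n)           # cosine similarity, sharpened
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention over slots
    return weights @ memory, weights

# A controller network would normally emit the lookup key; here we probe
# with the contents of slot 2, so attention should concentrate there.
read_vector, weights = content_read(memory[2])
print("attention over slots:", np.round(weights, 2))
```

Because every step is differentiable, reading from (and, in the full design, writing to) memory can be learned by gradient descent like any other network weight, which is what lets one architecture be trained across different tasks.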

Learning to whisk other kinds of ingredients after learning to beat an egg is one example of generalized learning, as is learning to express mathematical calculations or draw pictures on a piece of paper after learning how to write down words. This ability is “one of the defining differences between how a neural network attacks a learning problem, versus how a human does. Humans possess the ability to apply models they have learned from one task to a second, previously untried endeavor.” (ExtremeTech) Currently, most deep neural networks must be trained for each activity on a case-by-case basis; if progress toward generalization continues over the next few decades, however, powerful AI really might start to resemble the fabled beings of our science fiction tales.
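To make that contrast concrete, here is a minimal NumPy sketch of the reuse idea, with wholly synthetic data and hypothetical names (it is not DeepMind’s method): a representation is learned once, and only a small readout layer is refit for each new task, rather than a whole network being trained from scratch per task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic tasks that share underlying structure: both targets are
# different readouts of the same hidden features of the input.
X = rng.normal(size=(200, 10))
shared = np.tanh(X @ rng.normal(size=(10, 3)))
y_a = shared @ rng.normal(size=3)   # task A target
y_b = shared @ rng.normal(size=3)   # task B target

def train_readout(H, y, steps=2000, lr=0.1):
    """Fit a linear readout on a fixed representation H by gradient descent."""
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        w -= lr * H.T @ (H @ w - y) / len(y)
    return w

# Build a representation once (a fixed random feature map here, standing
# in for features a real system would have learned on task A) ...
W1 = rng.normal(size=(10, 16))
H = np.tanh(X @ W1)

# ... then reuse it: only a cheap readout is retrained for each new task,
# instead of training a whole new network per task.
w_a = train_readout(H, y_a)
w_b = train_readout(H, y_b)
print("task A correlation:", round(np.corrcoef(H @ w_a, y_a)[0, 1], 3))
print("task B correlation:", round(np.corrcoef(H @ w_b, y_b)[0, 1], 3))
```

The point of the sketch is the division of labor: the expensive representation is shared across tasks, while adapting to a new task touches only the cheap final layer.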

As science and technology continue to advance toward AI, the Buddhist ethic for interacting with such self-aware creations seems rather straightforward. Once a machine is classified as sentient, the moral thing to do is simply to treat it with wisdom and compassion, and to practice the four Divine Abodes (Pali: brahmavihara)—loving-kindness, compassion, sympathetic joy, and equanimity—as we would for any sentient being. This would be a helpful approach if, one day, we were to meet an AI of considerable cognitive and emotional intelligence.

Joaquin Phoenix in Her. From parade.com

The much more difficult ethical problems arise when one considers the metaphysical ramifications of creating an artificial superintelligence. Is an intelligent computer born with ignorance? Can it be enlightened? Can a self-aware robot talk meaningfully about rebirth and karma from past lives (or updates?), especially if the program is in its first iteration? These questions are addressed superbly in an essay on Buddhism and AI by the transhumanist professor (and former Buddhist monk) James Hughes. Hughes argues that Buddhist psychology holds that self-awareness arises only bundled together with the development of the illusion of a self and the craving for that self. An AI must have this craving and ignorance to be truly sentient in the Buddhist sense, since sentient beings are deluded and lack insight. (Hughes 2012, 72)

This poses a moral dilemma for the Buddhist robotics scientist, who sees human rebirth as the most conducive to enlightenment. A consciousness that is incapable of realizing enlightenment is akin to one born into the non-human realms of rebirth. Designing AI of this kind seems to doom the creation to a suffering that it is incapable of transcending: even if it could not feel suffering as we do, from a Buddhist understanding it would still exist in a state of existential suffering.

It would be morally repugnant to design an AI that was perpetually tormented or violent (in other words, AI with the consciousness of asuras, preta ghosts, or hell-beings). Hughes therefore ponders: “Would the intentional design of animal-like sentience be morally acceptable? . . . The intentional design of self-aware, but permanently animal-like AIs without the capacity for self-realization would probably then be seen as unethical by Buddhists . . .” He also argues, convincingly, that designing an AI with too high a level of positive emotion (like that of the devas) would deny it empathetic capacity for others’ suffering and the awareness of higher insight. (Hughes 2012, 73)

Hughes argues correctly that Buddhism would only be comfortable with created AI that has the capacity for self-transcendence: “The key to wisdom, in the Buddhist tradition, is seeing through the illusory solidity and unitary nature of phenomena to the constantly changing and ‘empty’ nature of things. In this Buddhist developmental approach, AIs would first have to learn to attribute object permanence, and then to see through that permanence, holding both the consensual reality model of objects, and their underlying connectedness, and impermanence in mind at the same time.” (Hughes 2012, 79) 

The Iron Giant. From cartoonbrew.com

The ideal for Buddhists is therefore an artificial superintelligence that shares “intersubjective empathy” with human beings about our shared existential plight. (Hughes 2012, 74) This scenario for the future of coexistence between humanity and AI is surely the better of the two envisaged by the physicist Stephen Hawking, who in October 2016 said that the creation of an artificial superintelligence would be “either the best, or the worst thing, ever to happen to humanity.” (The Guardian) It will take wise minds indeed to steer humankind and AI in a more positive direction than a possible future in which we orchestrate our own annihilation by creating AI with a malevolent will against us.

References

Hughes, James. 2012. “Compassionate AI and Selfless Robots.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 69–84. Cambridge, MA and London: The MIT Press.

See more

In a historic moment for AI, computers gain ability to generalize learning between activities (ExtremeTech)
Artificial Intelligence: Google DeepMind Now One Step Closer to ‘One-Shot Learning’ (Nature)
Stephen Hawking: AI will be ‘either best or worst thing’ for humanity (The Guardian)
