Artificial Intelligence - Predictions and Ethics


Artificial intelligence is a common topic in both science fiction and projections about the future of technology and society. The prospect of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.

In fiction, artificial intelligence has appeared in many roles, including a servant (R2-D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek: The Next Generation), a conqueror/overlord (The Matrix, Omnius), a dictator (With Folded Hands), a benevolent provider/de facto ruler (The Culture), an assassin (Terminator), a sentient race (Battlestar Galactica/Transformers/Mass Effect), an extension to human abilities (Ghost in the Shell) and the savior of the human race (R. Daneel Olivaw in Isaac Asimov's Robot series).

Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, including the films I, Robot, Blade Runner and A.I. Artificial Intelligence, in which humanoid machines have the ability to feel human emotions. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature. The subject is discussed in depth in the 2010 documentary film Plug & Pray.

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, and others argue that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations, and in particular entry-level jobs, will be increasingly susceptible to automation via expert systems, machine learning and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.

Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggested that AI research devalues human life.

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity".
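As a rough illustration of the kind of extrapolation involved, the sketch below (in Python) estimates how long exponentially growing compute would take to reach a brain-equivalent throughput. All of the numbers are illustrative assumptions rather than Kurzweil's own figures: a doubling period of about two years, a notional desktop machine at 10^13 operations per second, and an often-quoted (and disputed) brain estimate of roughly 10^16 operations per second.

    # Rough Moore's-law extrapolation. All figures are illustrative assumptions.
    import math

    def years_until_parity(current_ops, target_ops, doubling_years=2.0):
        """Years for exponentially growing compute to reach a target throughput."""
        doublings_needed = math.log2(target_ops / current_ops)
        return doublings_needed * doubling_years

    # Notional desktop (~1e13 ops/s) vs. a commonly cited brain estimate (~1e16 ops/s).
    print(round(years_until_parity(1e13, 1e16)))  # ~20 years under these assumptions

Under these assumptions the answer comes out to roughly twenty years, which is how a Moore's-law argument turns a hardware trend into a calendar date; the conclusion is only as reliable as the assumed brain-equivalent figure and the assumption that the exponential trend continues.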

Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the work of Aldous Huxley and Robert Ettinger, and it has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune.

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be friendly. He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." We should not assume that machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed in Samuel Butler's "Darwin among the Machines" (1863) and expanded upon by George Dyson in his 1998 book of the same name.

