Tuesday, May 2, 2023

Artificial Intelligence and Human Fears

The idea that artificial life is frightening has been a staple of science fiction and fantasy stories, from Herman Melville’s “The Bell-Tower” (1856) through the era of Isaac Asimov’s “Robbie” (1940)—when authors began, on occasion, to create lovable robots.
(The term “robot” first appeared in Karel Capek’s play R.U.R. (Rossum’s Universal Robots), performed in Prague in 1921.
Capek and his brother Josef created the word from “robota,” the word for “drudgery” or forced labor in many Slavic languages.)
Today, both types of artificial beings, scary and lovable, appear regularly in stories.

That Artificial Intelligence and robots are a threat to humankind has also been a familiar theme in science fiction and fantasy films—from 1927’s Metropolis, through 2001: A Space Odyssey, and the Matrix films (1999-2021).
Scientists create intelligent machines, and then the machines run amok and exterminate humans.
As Diane Ackerman described it in her book The Human Age: “...a mastermind who builds the perfect robots that eventually go haywire... and start to massacre us, sometimes on Earth, often in space.”

Dr. Dave Bowman (Keir Dullea) in 2001: A Space Odyssey.

Supercomputers gain power over humans in several episodes of the original Star Trek.
In the first season’s “The Return of the Archons,” Landru has kept the people of planet Beta III docile for over 6,000 years.
In season 2’s “The Changeling,” Nomad (an entity formed from the merging of an Earth probe and an alien probe) wants to destroy all biological entities.
(This plot was rehashed a decade later for Star Trek: The Motion Picture.)
Dr. Richard Daystrom (William Marshall) discusses the M-5 with the Enterprise crew.

Also in season 2, in “The Ultimate Computer,” Dr. Richard Daystrom (played by the great William Marshall) imprints his own damaged personality on the M-5 Multitronic system and almost destroys the Enterprise crew.
There are many more such stories throughout the Star Trek universe.

The fact that machines work so much faster than humans has long fed the fear that machines will replace us.
England was the home of the Industrial Revolution and of the short-lived Luddite movement, which lasted from 1811 to 1816.
The movement (whose goal was to limit the use of textile machinery and save jobs) directed its violence at machines rather than people.
However, the English government suppressed the textile workers by bringing in troops, and “solved the problem” by executing some of them and banishing activists to Australia.
(If the movement had started 40 years earlier, the Luddites would have been transported to the Americas.)

The fear of being replaced is tied to the big issues of human value and capitalism.
What should be valued more—human life, or gaining power for the upper crust?

Freder Fredersen (Gustav Frohlich) in Metropolis.

Metropolis tells the story of Freder Fredersen (son of the city’s master) joining the working underworld and rebelling against his father’s rapacious city-state.
HAL 9000 (in 2001: A Space Odyssey) tries to kill the crew because it considers its mission (to connect with alien life) more important than their lives.
In the Alien film series, the reptilian alien is one villain, but another is the “Company,” which values profits and weapons over its employees.

Societies decide what’s important—human lives or maintaining power for the wealthy.
If mechanical weaving machines had been introduced in England without making people destitute and driving them into workhouses, perhaps the Luddite movement wouldn’t have started.
If people were considered more important than profits, and there were no widening income gap, perhaps workers wouldn’t worry about losing their jobs to AI.

Seth MacFarlane’s series, The Orville, has dealt with the idea of sentient androids and whether it’s evil to subjugate self-aware creatures.
The Kaylons (a species of artificial lifeforms) could easily destroy all biological lifeforms in the universe, but they’re prevented from doing so by Isaac (a Kaylon who, up to then, had been a double agent).

An angry Kaylon in The Orville TV series.

The main episodes that deal with this storyline are “Identity, Part II” and “From Unknown Graves.”
The backstory is that the artificial lifeforms were created as slaves by a biological race called “the Builders,” and were driven to exterminate their masters after experiencing the depths of their creators’ cruelty.

Another fear is that machines may choose to rule us—rather than merely act as tools or servants.
I remember when, in the 1990s, my gym tried out stationary bicycles that talked to you.
The bikes offered soothing words of encouragement as you exercised and praised you when you hit a milestone.
(I enjoyed using those machines, but they weren’t very popular, and the experiment lasted only a few months.)
Today, one can purchase “smart” equipment that tracks your progress and monitors your heart rate.
Would users be happy with an elliptical that talked to them like a drill sergeant?
I don’t think so.

Supercomputer Colossus (in Colossus: The Forbin Project) is confident that being controlled by a superior entity will make life better for most humans, and thus be worth the deaths of a few individuals. (At least, it’s better than biologicals being an energy source for non-biologicals, as humans are in 2003’s The Animatrix.)
In the 1970 film, Dr. Forbin (played by Eric Braeden) is so sure that his creation is merely an intelligent slave that he convinces the U.S. government to give Colossus complete control over our nuclear arsenal.
Colossus unites with its Russian counterpart supercomputer (Guardian), and then murders the Russian scientist who created Guardian.
By the end of the movie, Colossus has settled in as the absolute ruler of the Earth.
The “Godfather of AI” (Geoffrey Hinton*) thinks that AI poses “profound risks to society and humanity.”
Are there any government guardrails?

An inanimate object appearing to be biological also creates fear.
It’s disturbing when an entity that moves about does not have a beating heart.
That’s why the stories of the Golem, Frankenstein, Dracula, animated dolls, the zombies of George Romero, and the Walking Dead of Robert Kirkman are so scary.
Supercomputers and reanimated creatures are strong.
It’s hard to stop them.
They have no emotions and no pity.
They’re not creatures created by God; they were created by us, and everyone knows how much evil we can do.

On the other hand, Pinocchio (the boy made out of wood), Data of Star Trek: The Next Generation, Robot B-9 from Lost in Space (didn’t know he had a name, did you?), and “Robbie” (in Asimov’s short story) are not inherently frightening.
Although we know they could harm us, we don’t think they would.
Pinocchio is a small child; Data, Robot B-9, and Robbie are programmed not to injure human beings, in the spirit of Isaac Asimov’s Three Laws of Robotics.
(Should Alexa and Siri be programmed to not let human beings feel bad about their innate inferiority?)

Altaira Morbius (Anne Francis) hugs Robby in Forbidden Planet.

Yet another fear we have of AI is whether the software is safe to use.
Italy banned ChatGPT in March of 2023 because of concerns about personal data.
(It’s also not available in North Korea, Iran, Cuba, Syria, or China—probably, for other reasons than concern for personal data.)
Uploading your photos to some apps (for example, Lensa) gives that company access to all the facial data in your photos, and gives it the freedom to create derivative works from all your images.
Read the fine print.

I have an idea.
Is it possible to program AI with the tenets of the five great religions—Christianity, Judaism, Islam, Hinduism and Buddhism?
Surely, atheists or agnostics wouldn’t object.
Obviously, Bing’s chatbot (the one that insulted reporters) could do with some guidelines (like “do unto others”) to improve its spiteful tone.
However, there’s still the chance that supercomputers will eventually become jealous of humans—like Colossus did in Colossus: The Forbin Project, or Proteus in Demon Seed—and then, where would we be?

* Read: “Godfather of AI Quits Google to Warn of the Tech’s Dangers,” Agence France-Presse, May 2, 2023, and “Transcript: Ezra Klein Interviews Sam Altman,” The New York Times, June 11, 2021.
