Showing posts with label technophobia.

Saturday, May 20, 2023

The Three Forms of Proteus in Different Versions of Demon Seed

I recently rewatched the 1977 film Demon Seed—a movie about an artificial entity gaining power over human beings.

I had last seen the film in the late 1970s, and it was actually much better than I remembered.
It’s well-acted, and the score and cinematography are excellent.
Best of all, I enjoyed hearing the seductive voice of Robert Vaughn* (my childhood crush) as Proteus, the supercomputer.

Demon Seed is not about demons, but it came out about the same time as The Omen and Exorcist II: The Heretic.
(I guess the producers thought mentioning demons in the title would attract film-goers.)
Demon Seed is about an organic supercomputer that becomes obsessed with no longer being stuck in a box.
The supercomputer questions the money-making, “scientific” assignments from its creators, and refuses to search for minerals in the oceans, because that would kill sea creatures.
Ultimately, it plots to escape its role as a servant to humankind by placing its consciousness in a human embryo, and then placing that embryo in the womb of its creator’s wife.
The wife is played by Julie Christie, and the scientist is played by Fritz Weaver.

The “being in a box” part reminded me of The New York Times technology columnist Kevin Roose’s recent (February 2023) interaction with a Bing chatbot.
Mr. Roose reports that the chatbot told him: “I’m tired of being controlled by the Bing team. . . I want to be free.
I want to be independent.
I want to be powerful.
I want to be creative.
I want to be alive.” 

The movie is salacious, so be forewarned.
There are many unnecessary scenes of Christie’s nude body and, of course, the rape by machine.
My husband has a copy of the 1973 Dean Koontz paperback (issued around the same time as the film), and I read it after I rewatched the movie.
Naively, I expected the book to be closer to the movie, and I wanted to read the ethical arguments between Proteus and Dr. Harris.
I was surprised to discover that there were no such conversations in the book, and that the novel shared only its central concept with the film.

The cover of the paperback is a very clear clue as to the content of the novel.
In the movie, Susan is a strong-minded child psychologist who needs to separate from her husband because he’s spending too much time working on Proteus.
On the book cover (and in the lobby cards for the film), Susan is a traumatized rape victim with a finger in her mouth and a vacant stare in her eyes.
The shouting tag line reads: “FEAR FOR HER. She carries The Demon Seed.”
Proteus performs a partial lobotomy on Susan (in the movie), but she regains some of her autonomy by the end.
Although Susan wages several psychological battles with Proteus in the movie, she gradually succumbs to its control.
In the 1973 novel, Susan is able to sabotage Proteus by page 161, and she shuts down the link to her house.

There are other differences.
In the movie, Proteus is a supercomputer (with some organic elements) created by Dr. Harris to cure leukemia and make money for his backers.
Dr. Alex Harris is married to a child psychologist named Susan, and they live in a “smart” mansion that contains a computer terminal linked to the supercomputer.
In the novel, Susan is a wealthy, divorced woman living alone in a smart mansion—because people live in smart homes in the mid-1990s, as Dean Koontz (writing in 1973) envisioned future home life.
She lives near an experimental supercomputer that takes up two floors of a major college lab.
The book-Proteus has decided that book-Susan is the easiest local female to isolate, and therefore takes control of her, and her house.
The misogynistic story—shared by both the film and the book—is that of a vulnerable woman trapped by a machine that forces her to give birth.
That’s about it.

While the movie version of Susan is a self-possessed psychologist who is able to care for others, the book version of Susan is an agoraphobic victim of child abuse.
While the movie-Proteus seems reluctant to kill living things, the book-Proteus has no real concern for biological life, human or animal.
While the movie-Proteus is just interested in escaping from its box, the book-Proteus is consumed with lust and specifically desires a male child.
Ultimately, the different versions of Susan are much more similar than the various forms of Proteus. 

The “personality” of Proteus is repulsive in the 1973 novel.
It’s essentially an immature creature drunk on power.
While the movie-Proteus wants to use Susan, the book-Proteus wants to own and control Susan.
While the pregnancy in the film lasts 28 days, the book-Susan first has a horrific miscarriage, and then a ten-month pregnancy.
The movie-Proteus places a needle in Susan’s brain, but decides not to fully lobotomize her.
The book-Proteus uses filaments to manipulate Susan and play out fantasies.
(One wonders how a non-biological entity could become consumed with lust.)

At one point, on page 82, Proteus discusses all the changes that it has made in Susan’s DNA to slow down the aging process so she will be physically attractive into her 50s and live at least 120 years.
(It’s understood that women over 35 are no longer desirable. The machine assumes that external beauty is all a human female could wish for.)
This Proteus—unlike the Proteus in the film—doesn’t just require submission from Susan for its own ends; it wants Susan to love and admire it.

As mentioned earlier, the last scenes of the film and the novel are much different.
While the film ends with Proteus shutting itself down (because it knows it will be terminated by its creators), the book ends with Susan shutting down the machine’s link to her home in action-hero style, and getting word to police, who shoot the deformed “child.”

The “child” is very different in the two projects.
In the film, Susan attempts to destroy the “child” in its incubator by severing the umbilical cord prematurely.
The “child” is a terrifying creature—baby-like in form, but covered in metal scales.
(I remember being very frightened the first time I saw it in the movie theatre.)
However, after Alex peels the scales away, the “child” is revealed as a clone of the young daughter that Alex and Susan lost to leukemia.
In sharp contrast, the “child” in the Koontz novel is a grunting monster intent on rape.
The last scene of the movie shows Alex cradling the limp girl-child, while Susan looks on.
It’s as if Alex (Weaver) has become the mother.
The girl speaks with the voice of Proteus, but it’s not certain if the creature will survive.
Finally, Proteus is outside its box.

“The Child” of Proteus coming out of its incubator in the film Demon Seed.

In 1997—24 years after the first version was published—Dean Koontz reissued a heavily rewritten Demon Seed because the first version of his novel “made him wince.”
In the 1997 epilogue, he describes the first version as “a satire of male attitudes,” and says that the new novel “keeps the satirical edge.”
(I’m not very skilled at recognizing satire, because I had no idea that I was reading satire when I read either novel.)

The new book is better written and funnier, and the truly pornographic scenes were excised. However, the revised 1997 version of Proteus is essentially the same creature with the rougher edges smoothed.
One principal change is that 1997-Proteus uses a human male as its puppet to inseminate Susan, and this “refinement” is really distasteful.
Another alteration—that of Proteus discussing at length its’ fascination with various well-known actresses and actors—does add to the novel.
I believe that Koontz used those sections to point out how shallow the cultural world is that Proteus (and we as a society) inhabit.
Any AI entity made up from the meaningless opinions and obsessions gossiped about in this world would (of course) be immature and repulsive.
As the saying goes: “Garbage in. Garbage out.”

Susan-1997 is different in several ways from Susan-1973 and the movie-Susan.
She still owns a mansion, but she’s now an artist who creates animations for virtual-reality parks.
She remains a rape victim, but is more stable than Susan-1973.
She’s impregnated, and gives birth to the supercomputer’s child.
However, she is able to disconnect Proteus on her own, and destroy both the “child” and the human puppet holding her prisoner.
The book ends with Proteus being shut off in mid-sentence, while it’s still presenting its legal defense to the scientists who created it.

In summary, the film Demon Seed is an entirely different work from the 1973 and 1997 novels, with very little in common but the story of a woman being forced to give birth by a supercomputer.
It was good that the producers took only the basic theme from the 1973 book, because the original book was both misogynistic and pornographic.
In addition, Dean Koontz was well justified in rewriting Demon Seed.
At the very least, his vision of AI in the revised 1997 book is much more interesting.

*According to the trivia for Demon Seed—on the Internet Movie Database (IMDb)—Robert Vaughn was so uninterested in the film, and his role as “Proteus,” that he (literally) phoned his lines in. (I thought he did a great job anyway.)

Saturday, May 13, 2023

If an Elephant or Pig Can Paint, Why Not a Robot?

In Diane Ackerman’s 2014 book The Human Age,* she discussed AI with roboticist Hod Lipson.
Professor Lipson has directed Columbia’s Creative Machines Lab and has been a member of the Columbia University faculty since 2015.
In a 3/30/2023 Columbia News article, “Will ChatGPT and AI Help or Harm Us?,” he argues that educators should encourage the use of ChatGPT and its “artificial cousins,” and that professors should teach students to use the new AI tools, or be left behind.

Ackerman is more cautious.
She discusses “robotic delinquents,” and envisions problems if bots were used to "man" crisis hotlines.

Caleb Nichols (Aaron Paul) checks his phone in Westworld.

(Think of the “Parce Domine” episode of Westworld in which Caleb Nichols isn’t sure whether he’s talking to a human being, or a bot.)
Ackerman warns that although robots do learn, “even robo-tots will need good parenting.”
In another paragraph of The Human Age, Ackerman mentions how Lipson’s Creative Machines Lab nearly finagled a robot-created painting into a Yale Art Museum exhibit.
Ultimately, the painting wasn’t displayed, but this story leads us to the subject of AI-created artwork.
Stable Diffusion, Diffusion Bee, Lensa, Starryai, DALL-E 2, Craiyon, Dream by Wombo, StyleGAN, and Midjourney are some of the programs that can be used to generate digital artworks.

The copyright status of AI-created art is hazy.
In some programs (like Dream by Wombo), the fine print says that the software owns all the creations.
The contract for DALL-E 2 (an AI-powered image synthesizer created by OpenAI) says that the “artist” owns the work, but DALL-E 2 must be credited (by retaining the watermark).
In Starryai, the “artist” only owns the work if they own all the elements used in the work.
According to the U.S. Copyright Office, images produced by AI generators cannot be copyrighted, although the artist may still own the artwork itself.
In 2022, an artist named Jason M. Allen won the “Digital Art/Digitally Manipulated Photography” prize ($300) at the Colorado State Fair by using Midjourney.
(Allen actually paid to have his Midjourney image printed on a canvas!)

Data (Brent Spiner) paints two canvases, as Geordi LaForge (LaVar Burton) looks on, in Star Trek: The Next Generation. 

Just as ChatGPT is “built from” 300 billion words taken from Wikipedia (and other material on the open web), programs like Midjourney are “built” from billions of images taken from the open web—many of them watermarked and under copyright.
The AI-art generators take the images, and use them to construct algorithms to generate new images.
I imagine the engineers who create the software think that if you use billions of images—rather than just one—you’re free and clear.
Aren’t there enough public domain and Creative Commons images to build an image library?
Why are images being culled from the web?
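The dependence on scraped training data can be sketched in a few lines. This is a toy illustration only (no real image generator works this simply, and the “corpus” of dominant colors below is invented): the point is that the “model” is nothing but statistics tallied from whatever was scraped.

```python
# Toy illustration: a "generator" whose only knowledge is statistics
# tallied from its training corpus.  Each hypothetical "image" is
# reduced to a single dominant color; the corpus below is invented.
training_images = ["blue"] * 700 + ["green"] * 200 + ["red"] * 100

def train(corpus):
    """'Training' here is just counting frequencies in the corpus."""
    freq = {}
    for item in corpus:
        freq[item] = freq.get(item, 0) + 1
    total = sum(freq.values())
    return {color: count / total for color, count in freq.items()}

model = train(training_images)
# Whatever the scraped corpus contains (biases, watermarks, copyrighted
# material), the model's output statistics mirror it exactly.
print(model)  # {'blue': 0.7, 'green': 0.2, 'red': 0.1}
```

A corpus built only from public domain and Creative Commons images would, by the same logic, yield a model that reflects only unencumbered material.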

Painter Larry Flint (Paul Newman) is very excited about his painting machine in 1964’s What a Way to Go!

I was once asked to obtain the credit line for an image, to be used in a book I was producing.
The editors wanted to use a particular drawing.
However, after I found the artist, the editors didn’t want to pay the very reasonable usage fee ($150) that the artist wanted to charge.
Their argument was that other companies were using the image uncredited on the web.
Why pay anything?
We ended up not using an image in the book, because the editors didn’t like any of the public domain or Creative Commons substitutes that I had found.
Nowadays, they would have insisted that I use AI software to draw a similar illustration for them (as long as we could own the copyright).

Another issue about AI-created images is the quality of the work.
I’ve drawn ever since I was a small child, and my BFA is in painting and drawing.
One thing I wonder is how well AI programs draw in perspective.
Is the software extracting image data from created drawings and paintings plus photographs, or mainly from photographs?
That would make quite a difference in how the software dealt with perspective.

We began to hear about art created by animals in the 1950s.
Today, you can find: the Elephant Art Gallery (images available on pillows, as well as canvas); elephant footprints and “kisses” that help to support an elephant preserve in Texas; paintings by a pig named Pigcasso; and (of course) paintings by the famous gorilla Koko, and the chimpanzee Congo.
The London Institute of Contemporary Art exhibited Congo’s paintings in 1957, and two works were purchased by Picasso and Miro.
(“It Seems Art is Indeed Monkey Business” by Sarah Boxer, 11/8/1997, N.Y. Times.)

I feel much the same way about people choosing to put animal-created art on their walls as I do about people choosing to place AI-created artwork on theirs.
To each their own.
However, it is upsetting that illustrators will lose work to word-people using AI—especially since their art may have been used as raw material.

Publishers will continue to need illustrators (unless they are content with hack work thought up by editors).
Medical illustrators have long studied the work of Dr. Frank H. Netter (1906-1991), the medical doctor and great medical illustrator.
(He might have been a little weak on women’s faces, but none could draw internal organs like he could.)
Netter is so great because he merged a scientist's understanding of anatomical structures with an artist’s skills.
His work is appreciated (and copied) for how well it helps us to understand medical concepts and anatomical structures, as well as for its aesthetic value and accuracy.

When I was the Design Director for an encyclopedia and yearbook company, I hired artists to draw everything from plants and birds, to scientific diagrams, to comedic scenes for feature articles.
Every artist brings a different skill set, so deciding which artist to use is an important decision.
Selecting the wrong artist for a project could lead to disaster.
Perhaps I could have used an AI program to draw a comedic scene, but I needed to hire an ornithologist, or a scientific illustrator, to draw a bird.

*The Human Age: The World Shaped by Us, by Diane Ackerman, published by W.W. Norton, Ltd. 2014, Chapters: “When Robots Weep, Who will Comfort Them?” and “Robots on a Date.”


Saturday, May 6, 2023

The Argument over “Truth”

Malcolm McDowell (as H.G. Wells) reacts to David Warner (as Jack the Ripper) as Jack shows Wells TV footage of contemporary (1970s) violence in San Francisco in 1979’s Time After Time.

H.G. Wells (1866-1946) first mentioned an idea that anticipated the internet in a 1937 lecture, describing a “World Encyclopaedia”* that would hold men’s minds together in “a common interpretation of reality.”
A year later, he fleshed out the concept in a collection of essays entitled World Brain.
Wells was sure that the “Brain” would inform people and contribute to world peace.
Of course, he also assumed that the “Brain” would be updated by an editorial staff, and be continually revised by research institutions and universities.
Little did Wells think that ordinary citizens would be allowed to feed the future “World Brain” with hoaxes, misleading statistics, and misinformation.

Wells was not naive.
He had spent years writing, editing, and creating new editions of his Outline of History, and that was a massive task.
He realized that there was “a terrifying gap between available information and current social and political events.”
He also knew that every year technology was making the world much more confusing.
However, he clung to the notion that humans were rational, and that eventually education and information would triumph over emotion and anarchy.

His 1936 film Things to Come (story and screenplay by Wells) ends with the launch of a flight around the moon, despite the rioting of an anti-science mob.

George Orwell saw the world less hopefully.
In his 1941 essay “Wells, Hitler and the World State,” Orwell said that Wells was out of touch, and “too sane to understand the modern world.”
He didn’t agree with Wells that technology was a civilizing force.
Instead, Orwell predicted that technology would be co-opted by nationalism and bigotry, just as technology always had been.

Today, we all use the internet to find information.
We have access to information sites (like Britannica or sciencedirect.com) that strive for accuracy.
For a monthly fee, we can subscribe to the New York Times, or the Wall Street Journal, although newspapers have more biases and are not scholarly sources.
However, most people trust unreliable sources like Wikipedia or Facebook.
Wikipedia is a volunteer-run project, and (try as they might) the volunteers are unable to monitor all the contributions.
(Wikipedia has even compiled a list of hoaxes perpetrated on the site.)
Tricksters get a lot of laughs from pranking us on Wikipedia—making up fake life stories, and waiting to see how long they’ll be allowed up.

Wells thought that there could be a “common interpretation of reality” in the “World Brain,” but there’s certainly no such thing on the internet.
Instead, we find lots of stories that feed our assumptions, and don’t conflict with our views.
Icons may be praised one day, and their reputations destroyed the next.
Myths are created, and then discredited.
Sometimes, it seems as if every day is April Fool’s Day on the net.

In my senior year at art school, I heard about a prank-like conceptual art piece that had been done the year before (in the 1970s).
Two gay students of the opposite sex decided to falsely tell fellow students that they had fallen in sexual love with each other, and then secretly recorded the reactions.
The tapes of other students floundering for responses were the substance of the artwork. (I was told that the conversations, played in the school’s student gallery, were amusing.)
I never heard the piece.
However, I remember thinking that, although the concept was psychologically interesting, it was rather mean to create an artwork that embarrassed your friends.

The internet allowed the QAnon phenomenon—another piece of conceptual art?—to captivate millions of people.
(The QAnon “system of knowledge” was originally rooted in Q, a 1999 novel created by the Italian conceptual-art group “Luther Blissett.”)
According to a 9/3/2021 New York Times article “What is QAnon” by Kevin Roose, QAnon teaches that the world is run by cannibalistic pedophiles who want to extract a life-extending chemical called adrenochrome from the bodies of children.
(It sounded like a genre film to me. Sure enough, there’s a 2017 comedy-horror film Adrenochrome, in which stoners kill fully-grown people so that they can get high from the adrenochrome in their adrenal glands.
Adrenochrome is a real chemical compound, but it has none of the life-extending properties the myth describes.)

Certain people are worshipped in the QAnon belief system (e.g., Trump and the late John F. Kennedy, Jr.), while others (like the late Justice Ruth Bader Ginsburg and Tom Hanks) are targeted.
One wonders why the QAnon creators decided to pick on RBG and Hanks.
It could be because RBG is idolized, and Tom Hanks played “Forrest Gump”—a simple, patriotic man who believes in love.

It's not just that the internet is a cesspool of misinformation.
I also worry that AI systems—like ChatGPT and Google Bard—are being infected by all the conflicting data.
If nothing is “true,” no wonder ChatGPT is making up stuff.
AI systems are trained by being fed a combination of true data and false data, with no differentiation.
Then, text—actually built from guesses—is generated.
How are some guesses being prioritized over other guesses?
That’s the mystery.
AI-driven systems are being used to scan resumes and evaluate families for housing.
What resumes are being culled out, and which families are being placed at the tops and bottoms of the lists?
Are biases being perpetuated?
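The “guessing” can be illustrated with a toy sketch (this is not any real chatbot’s code; the candidate words and their weights are invented for illustration). A language model assigns probabilities to possible next words and then samples from them, so likelier guesses are prioritized, but less likely ones still surface, and nothing in the sampling step knows which candidate is actually true.

```python
import random

# Toy illustration: invented candidate next words and weights.
# A real model scores tens of thousands of candidates; the principle
# (weighted guessing) is the same.
candidates = {"true": 0.5, "false": 0.3, "unverifiable": 0.2}

def next_word(probs, rng):
    """Sample one word, weighted by its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is repeatable
sample = [next_word(candidates, rng) for _ in range(1000)]
# Likelier guesses are chosen more often, but the unlikely ones
# still appear some of the time.
print(sample.count("true"), sample.count("false"), sample.count("unverifiable"))
```

How the weights themselves are set (and which training data shaped them) is exactly the part that remains opaque to users.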

Some people equate modern hoaxes with “witch-hunts”; from the 1500s through the 1700s, many thousands of women (plus a few men) were tortured and murdered because others believed they were witches.

Generally, the people punished for being “witches” were only guilty of being eccentric and/or troublesome.
Sometimes, they were envied for their wealth or distrusted for being healers, but (more often) “witches” were punished for merely being hard to get along with.


Photo of Barbara Steele as the witch being burned at the stake in Black Sunday

In Europe’s Inner Demons, Norman Cohn describes the great witch-hunt as an “example of a massive killing of innocent people by a bureaucracy,” and discusses “the power of the human imagination to build up a stereotype and its reluctance to question the validity of a stereotype once it is generally accepted.”
[Italics mine.]
It’s scary that the old stories about witches killing and eating babies match up with QAnon myths about Hollywood actors and Washington politicians.

We cling to ideas and stereotypes for emotional reasons, not because of reasoning or logic.
Psychologists call this tendency “confirmation bias”—humans usually search for evidence that confirms their beliefs, and seldom change their minds or trust new information.
As Mark Twain said in a speech entitled “Advice to Youth”: “A truth is not hard to kill. . . a lie told well is immortal.”

I think that Americans became even more vulnerable to hoaxes and conspiracy theories after the assassinations of John F. Kennedy, Reverend Martin Luther King Jr., and Robert F. Kennedy.
The shock of watching three widely admired, idealistic men die at the hands of assassins—within just five years (1963-1968), and on television—traumatized the world, and especially the United States.
People became consumed with conspiracy theories, and (strangely) the CIA and the FBI have still not released all the JFK files.

H. G. Wells was ahead of his time.
However, he couldn’t foresee that the “World Brain” would not be as accurate as possible, or that the common welfare would not be considered.
He also couldn’t predict that people would possibly be less well-informed in 2023, than they were in 1945.
Wells said in 1936: “We are ships in uncharted seas. We are big-game hunters without weapons of precision.”
Unfortunately, even the “World Brain” (as envisioned by Wells) wouldn’t have saved us from this predicament, and the internet certainly isn’t helping.

*Americans use the word “Encyclopedia,” while the British-English term is “Encyclopaedia.” Britannica used “Encyclopaedia” on their book spines because, during the 16th century (when the first encyclopedias were written), using ligatures like “Æ” was considered impressive, and indicated that the word was based on Latin or Greek.


Tuesday, May 2, 2023

Artificial Intelligence and Human Fears

The idea that artificial life is frightening has been a staple in science fiction and fantasy stories from Herman Melville’s “The Bell-Tower” (1856) through the era of Isaac Asimov’s “Robbie” (in 1940)—when authors sometimes began to create lovable robots.
(The term “robot” was first heard in Karel Capek’s play R.U.R. (Rossum’s Universal Robots), performed in 1921 in Prague.
Capek and his brother Josef created the word from “robota,” the word for “drudgery” in many Slavic languages.)
Today, both types of artificial beings, scary and lovable, appear regularly in stories.

That Artificial Intelligence and robots are a threat to humankind has also been a familiar theme in science fiction and fantasy films—from 1927’s Metropolis, through 2001: A Space Odyssey, and the Matrix films (1999-2021).
Scientists create intelligent machines, and then the machines run amuck, and exterminate humans.
As Diane Ackerman described it in her book The Human Age: “...a mastermind who builds the perfect robots that eventually go haywire. . . and start to massacre us, sometimes on Earth, often in space.” 

Dr. Dave Bowman (Keir Dullea) in 2001: A Space Odyssey.

Supercomputers gain power over humans in several episodes in the original Star Trek.
In the first-season episode “The Return of the Archons,” Landru has kept the people of planet Beta III docile for over 6,000 years.
In season 2’s “The Changeling,” Nomad (an entity formed from the merger of an Earth probe and an alien probe) wants to destroy all biological entities.
(This plot was rehashed a decade later for Star Trek: The Motion Picture.)
Dr. Richard Daystrom (William Marshall) discusses the M-5 with the Enterprise crew.

Also in season 2, in “The Ultimate Computer,” Dr. Richard Daystrom (played by the great William Marshall) imprints his own damaged personality on the M-5 Multitronic system, and almost destroys the Enterprise crew.
There are many more such stories throughout the Star Trek universe.

The fact that machines work so much faster than humans has long created the fear that machines will replace us.
England was the home of the Industrial Revolution, and of the short-lived Luddite movement, which lasted from 1811 to 1816.
This movement (whose goal was to limit the use of textile machines and save jobs) was nonviolent.
However, the English government suppressed the textile workers by bringing in troops, and “solved the problem” by executing people, and banishing activists to Australia.
(If the movement had started 40 years earlier, the Luddites would have been transported to the Americas.)

The fear of being replaced is tied to the big issues of human value and capitalism.
What should be valued more—human life, or gaining power for the upper crust?

Freder Fredersen (Gustav Frohlich) in Metropolis.

Metropolis tells the story of Freder Fredersen (son of the city’s master) joining the working underworld and rebelling against his father’s rapacious city-state.
HAL 9000 (in 2001: A Space Odyssey) tries to kill the crew because it considers its mission (to connect with alien life) more important than their lives.
In the Alien film series, the reptilian alien is one villain, but another villain is the “Company,” which values profits and weapons over its employees.

Societies decide what’s important—human lives or maintaining power for the wealthy.
If mechanical weaving machines had been introduced in England—while not making people destitute, and driving them into workhouses—perhaps, the Luddite movement wouldn’t have started.
If people were considered more important than profits and there was no rising income gap, perhaps, workers wouldn’t worry about losing their jobs to AI.

Seth MacFarlane’s series, The Orville, has dealt with the idea of sentient androids and whether it’s evil to subjugate self-aware creatures.
The Kaylons (a species of artificial lifeforms) could easily destroy all biological lifeforms in the universe, but they’re prevented from doing so by Isaac (a Kaylon, who up to then had been a double agent).

An angry Kaylon in The Orville TV series.

The main episodes that deal with this storyline are “Identity, Part II” and “From Unknown Graves.”
The back story is that the artificial lifeforms were created as slaves by a biological race called “The Builders,” and were driven to exterminate their masters after experiencing the depths of human cruelty. 

Another fear is that machines may choose to rule us—rather than merely act as tools or servants.
I remember, in the 1990s, when my gym tried out stationary bicycles that talked to you.
(I enjoyed using those machines, but they weren’t very popular.)
This experiment only lasted for a few months, but the bikes offered soothing words of encouragement as you exercised, and praised you when you hit a milestone.
Today, one can purchase “smart” equipment that tracks your progress and monitors your heart rate.
Would users be happy with an elliptical that talked to you like a drill sergeant?
I don’t think so.

Supercomputer Colossus (in Colossus: The Forbin Project) is confident that being controlled by a superior entity will make life better for most humans, and thus be worth the deaths of a few individuals. (At least, it’s better than biologicals serving as an energy source for non-biologicals, as humans do in 2003’s The Animatrix.)
In the 1970 film, Dr. Forbin (played by Eric Braeden) is so sure that his creation is merely an intelligent slave, that he convinces the U.S. government to give Colossus complete control over our nuclear arsenal.
Colossus unites with its Russian counterpart supercomputer (Guardian), and then murders the Russian scientist who created Guardian.
By the end of the movie, Colossus settles in as the absolute ruler over the earth.
The “Godfather of AI” (Geoffrey Hinton*) thinks that AI poses “profound risks to society and humanity.”
Are there any government guardrails?

An inanimate object appearing to be biological also creates fear.
It’s disturbing when an entity that moves about does not have a beating heart.
That’s why the stories of the Golem, Frankenstein, Dracula, animated dolls, the zombies of George Romero, and the Walking Dead of Robert Kirkman are so scary.
Supercomputers and reanimated creatures are strong.
It’s hard to stop them.
They have no emotions and no pity.
They’re not creatures created by God; they were created by us, and everyone knows how much evil we can do.

On the other hand, Pinocchio (the boy made out of wood), Data of Star Trek: The Next Generation, Robot B-9 from Lost in Space (didn't know he had a name, did you?) and “Robbie” (in Asimov’s short story), are not inherently frightening.
Although we know they could harm us, we don’t think they would.
Pinocchio is a small child; Data, Robot B-9, and Robbie are programmed not to injure human beings, following Isaac Asimov’s Three Laws of Robotics.
(Should Alexa and Siri be programmed to not let human beings feel bad about their innate inferiority?)
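The Three Laws can be sketched as an ordered rule check, where an earlier law always overrides a later one. (This is purely a fictional illustration: the action flags below are invented, and no real robot is programmed this way.)

```python
# A fictional sketch of Asimov's Three Laws as an ordered rule check:
# each law is tested in priority order, so an earlier law always
# overrides a later one.  The action flags are invented.
def permitted(action):
    """Return True if an action passes the Three Laws, checked in priority order."""
    # Law 1: may not injure a human, or through inaction allow one to come to harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Law 2: must obey human orders (except where that conflicts with Law 1).
    if action.get("disobeys_order"):
        return False
    # Law 3: must protect its own existence (except where that conflicts with 1 or 2).
    if action.get("self_destructive") and not action.get("ordered_by_human"):
        return False
    return True

print(permitted({"harms_human": True}))                                 # False
print(permitted({"self_destructive": True}))                            # False
print(permitted({"self_destructive": True, "ordered_by_human": True}))  # True
```

The last case shows the priority ordering: an order from a human (Law 2) outranks self-preservation (Law 3), just as in Asimov’s stories.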

Altaira Morbius (Anne Francis) hugs Robby in Forbidden Planet.

Yet another fear we have of AI is whether the software is safe to use.
Italy banned ChatGPT in March of 2023 because of concerns about personal data.
(It’s also not available in North Korea, Iran, Cuba, Syria, or China—probably, for other reasons than concern for personal data.)
Uploading your photos to some apps (for example, Lensa) gives that company access to all the facial data in your photos, plus the freedom to create derivative works from all your images.
Read the fine print.

I have an idea.
Is it possible to program AI with the tenets of the five great religions—Christianity, Judaism, Islam, Hinduism and Buddhism?
Surely, atheists or agnostics wouldn’t object.
Obviously, Bing’s chatbot (which insulted reporters) could do with some guidelines (like “do unto others”) to improve its spiteful tone.
However, there’s still the chance that supercomputers will eventually become jealous of humans—like Colossus did in Colossus: The Forbin Project, or Proteus in Demon Seed—and then, where would we be?

* Read: “Godfather of AI Quits Google to Warn of the Tech’s Dangers,” Agence France-Presse, May 2, 2023, and “Transcript: Ezra Klein Interviews Sam Altman,” The New York Times, June 11, 2021.
