Saturday, June 10, 2023

“Woke” or “Anti-woke”: What Does ChatGPT Say?

I wanted to clarify (in my own mind) what it means to be “woke” or “anti-woke,” and how censorship relates to both concepts.
While it’s hard to agree on what these words describe, many are convinced that they’re either one or the other.
First, I wrote about 500 words.
Then, I downloaded ChatGPT, to see where the app took me.
To my surprise, ChatGPT was useful.
The ChatGPT text was as dry and colorless as expected.
Moreover, the essays were full of platitudes, and contained falsehoods.
However, ChatGPT did offer some ideas that I hadn’t thought of.

When I asked ChatGPT to define “woke,” I received an essay saying (in part) that:

To be woke signifies a deep commitment to social consciousness and active engagement with systemic injustices. . . and a willingness to challenge the status quo. 

The significance of being woke lies in its potential to bring about meaningful change, fostering a more inclusive, equitable, and just society for all.

When I asked ChatGPT for a definition of “anti-woke,” I received an essay saying that:

Being anti-woke represents a rejection of what critics perceive as the dogmatic and stifling aspects of the woke ideology.

It emphasizes individual responsibility, intellectual diversity, and the importance of free thought and expression.

While acknowledging the existence of systemic injustices, the anti-woke movement encourages a more nuanced and balanced approach to social issues . . . and open dialogue.

When I asked whether censorship was an issue on the political right or left, I received an essay saying that:

Censorship is not inherently a political right or left issue, but rather a complex and multifaceted dilemma that intersects with various ideologies.

While the political right may often advocate for censorship to protect societal values, the political left may prioritize censorship as a tool to combat discrimination and promote inclusivity. 

The ChatGPT response to my request for an essay on “culture wars” was mere gobbledygook.
It ended with the sentence: “Only through a commitment to mutual respect and a willingness to confront these issues head-on can the United States move towards healing and reconciliation.”
Well, that’s completely useless!

I would disagree with the ChatGPT definition of “woke” in several ways.
“To be woke” doesn’t always mean “a commitment to changing the status quo.”
Often, it just means being a bit more open to societal change.
“Woke” people are usually more open to erasing words like “master bedroom” from their vocabularies, using personal pronouns in their email signatures, and being more aware of microaggressions.
Often, it only means that the “woke” are more willing to face uncomfortable information, and learn from history.

I would also argue with the ChatGPT definition of “anti-woke.”
While “being woke” is perceived by the anti-woke as dogmatic, it’s difficult to figure out which beliefs are actually in contention.
It’s as if the perceived self-satisfaction of the woke is more distressing than their actual ideas.
“Collective guilt” and “cancel culture” came up in the ChatGPT essay, but I’m sure that only a small percentage of “the woke” feel guilt.
Further, the woke are more likely than the anti-woke to cancel people on their own side.
(Think of comedian Kathy Griffin and former Senator Al Franken.)
I also wonder what percentage of the anti-woke “acknowledge the existence of systemic injustices,” or desire an “open dialogue” (as suggested by ChatGPT).
Overall, being anti-woke may only mean that you are unhappy with the speed, or the existence, of societal change, or that you find “woke” people annoyingly self-righteous.

I was very happy with the ChatGPT response on censorship.
Saying that the political right wants to “protect societal values,” while the political left wants to “combat discrimination and promote inclusivity” just about sums it up.
However, everyone has their own thoughts about what our societal values should be, which words are good or bad in promoting inclusivity, and whether “words” are important in this task.

Front cover for the paperback version of Casino Royale by Ian Fleming (published under the name You Asked for It by Popular Library in 1953).

Back cover for You Asked for It.
President John F. Kennedy was a big fan of the James Bond spy-thrillers (oddly called Jimmy Bond on this back cover).
However, JFK likely read the hardcover versions.

In order to “promote inclusivity,” the publisher of the late Roald Dahl recently produced two different versions of James and the Giant Peach—changing “Cloud-men” to “Cloud-people” (among other changes) in their Puffin version—and keeping “Cloud-men” in the classic Penguin version.
The spy-thrillers of Ian Fleming, and the mysteries of Agatha Christie, underwent a similar process.

Lobby card for Gone with the Wind with house servant Mammy (Hattie McDaniel) tying the girdle (or stays) of Scarlett O’Hara (Vivien Leigh). Hattie McDaniel received an Oscar for Best Actress in a Supporting Role for playing Mammy.

Combating racial and other types of discrimination by “sanitizing,” or even cancelling, works isn’t new.
I remember debates in the 1970’s about whether 1939’s Gone with the Wind should be banned.
Disney’s 1946 blend of live-action and animation, Song of the South,* isn’t considered “appropriate in today’s world,” and hasn’t been legally available in the U.S. since its last theatrical release in 1986.
Some Warner Brothers cartoons (like “Herr Meets Hare” and “Daffy the Commando,” produced as propaganda between 1941 and 1945) were restored and rereleased—along with a lengthy disclaimer—in 2008.
(Volume 6 of the Looney Tunes Golden Collection.)
However, some of the more racially-insensitive 1930’s and World War II cartoons (for example, “Tokio Jokio”) will likely never see the light of day—at least, legally.

Meantime—in order to “protect societal values”—U.S. school boards are removing classic children’s books (like Charlotte’s Web and A Wrinkle in Time) from their school library shelves.
(I mention Charlotte’s Web and A Wrinkle in Time because these were two of my favorites.)
When I looked up why one parent group proposed removing 1952’s Charlotte’s Web, I found that the parents disliked characters dying, and thought that “talking animals” were “disrespectful to God.”
A Wrinkle in Time (1962) was criticized for “promoting witchcraft.”
I have fond memories of both books.
I remember my 4th grade school teacher, Mrs. Simmons, reading Charlotte’s Web aloud to us.
(I adored Mrs. Simmons.)
I checked out A Wrinkle in Time from our public library during the 1960’s, and ended up reading every other book I could find by Madeleine L’Engle.

Is it “woke” to buy a children’s book like 2005’s And Tango Makes Three—a story about two male penguins who help raise a chick together—in order to foster a more inclusive society?
Is it “anti-woke” to ask that And Tango Makes Three be removed from your public library, so that children won’t be influenced to accept homosexuality as normal?
In the end, I agree that parents have the right to keep certain books from their own children, but not the right to deny librarian-approved books to everyone else.

Uncle Remus and Brer Rabbit cover.
It’s believed that Beatrix Potter based her Peter Rabbit stories on Uncle Remus.

*Song of the South was based on the once well-known Uncle Remus stories. The folklorist/author was Joel Chandler Harris (1848-1908), a white journalist. Harris wrote down the Br’er Rabbit and Br’er Fox tales after listening to African folk tales told by former slaves—primarily, George Terrell. According to the Atlanta Journal Constitution (11/2/2006), Disney Studios purchased the film rights for Song of the South from the Harris family in 1939, for $10,000—the equivalent of about $218,246.76 today.
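As a side note, the inflation adjustment quoted above can be checked with a few lines of arithmetic. This is a minimal sketch of my own (not from the article or the Atlanta Journal Constitution); it simply derives the cumulative inflation multiplier implied by the two quoted dollar figures:

```python
# Hypothetical check (my own, not from the article): both dollar figures
# are quoted in the post; their ratio is the implied cumulative
# inflation multiplier from 1939 to "today."
original_1939 = 10_000.00
adjusted_today = 218_246.76

multiplier = adjusted_today / original_1939
print(f"Implied cumulative inflation multiplier: about {multiplier:.1f}x")
# prints: Implied cumulative inflation multiplier: about 21.8x
```

In other words, the article's figure assumes that a 1939 dollar is worth roughly 22 of today's dollars.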

Friday, June 2, 2023

Truth Versus Truthiness

Only those born after 1980 are considered digital natives.
I’m classified as a digital immigrant, because I was born around the same year as the Ferranti Mark 1—the world’s first commercially available general-purpose computer.
I grew up in the days of rotary phones and three TV networks, and I didn’t own a personal computer until the early 1990’s, when I purchased my first Macintosh, running System 6.

I went through a phase when I played games on my Mac, but I only liked clue-finding games.
I never devised an avatar, or played computer games with people around the world.
I met my husband at an office for freelancers—where people could get their resumes typed and use drop off boxes—not via Hinge or Tinder.
My only avatars are the sticker emojis I made on my iPhone, and the self-portrait I cobbled together from ready-made choices for Facebook.
My favorite apps are IMDb, the Food Network, YouTube, and Goodreads—sites where I can look up information or be entertained, not communicate with others.
I’m definitely a digital immigrant.

Most of the science-fiction I read is old—very old.
I enjoy rereading authors that I first read in the 1970’s—among them, Isaac Asimov, Primo Levi and Olaf Stapledon.
Authors have been debating whether artificial beings should be legal “persons,” and whether artificial intelligence will supersede humans, for a very long time.

In order to broaden my horizons, I decided to read more recent science-fiction, and I happened upon The Lifecycle of Software Objects by Ted Chiang.
According to Wikipedia, Chiang isn’t a digital native either.
(He was born in 1967.)
However, he’s won four Nebula Awards and four Hugo Awards; and he writes philosophical science-fiction—my favorite category.

Just as Karel Capek invented the word “robot” for his 1921 play R.U.R. (Rossum’s Universal Robots), Chiang invented the term “digient” for “digital entities.”
In The Lifecycle of Software Objects, digients are virtual pets created as pastimes for wealthy customers, who are then expected to parent them.
The novella is the story of two central characters (Ana and Derek), and their digients (Jax, and siblings Marco and Polo).
At the beginning of the story, Ana and Derek both work for a company that creates and sells digients.
After the software company goes bankrupt, Ana and Derek opt to take over the care of their favorite digital entities, so that their cherished entities may “live.” 

Chiang deals with both psychological and philosophical issues in The Lifecycle of Software Objects.
The principal subject is raising and educating the “infant” digients.
He further mentions that digients are equipped with “pain circuit breakers,” so they’ll be “immune to torture,” and thus “unappealing to sadists,” bringing up the fact that sociopaths will still be a societal problem in the near future.
I especially enjoyed the message board sequences in which obviously “bad” parents grouse about their “bad” digient children.

At one point in the story, the digient siblings, Marco and Polo, ask to be “rolled back” to an earlier point in their “lives,” because they’re unable to resolve an argument.
Is it right for “daddy” Derek to allow this; or should he force his “children” to work out their own disagreements, so that they may grow emotionally?
Later, Marco and Polo ask to become corporations, or legal persons.
Should Derek permit this?
Is it child abuse to separate a digient from its friends and fan clubs, or to alter its programming so that it can become a sex slave?

Scene of Robby the Robot disabling the weapons of “Doc” Ostrow (Warren Stevens) and Commander Adams (Leslie Nielsen) in 1956’s Forbidden Planet.

In Chiang’s novella, Ana disagrees with a company that wants her to help train a digient that “responds like a person, but isn’t owed the same obligations as a person.” [Italics mine.]
This scene reminded me of two TV series in which androids/robots are traumatized—The Orville (2017-?) and Westworld (2016-2022).
In season two of The Orville, we learn about the history of the Kaylons—a society of sentient artificial lifeforms (created as slaves) who exterminated the biologicals who created them.
In Westworld, the first season begins with human-like androids being the prey of depraved humans, but by season four it’s all-out war between androids and humans—one that seems to end, on Earth, with the same result as on planet Kaylon.

The central question is whether it’s ethical to enslave a sentient being—be it a virtual entity, robot, android, or human.
Is enslaving non-biologicals just as wrong as enslaving a fellow biological?
In a world in which human life is less important than money, is it senseless to worry about the treatment of virtual or robotic creatures?
After all, while many of us say we believe in fair play, unselfishness, and truthfulness, almost no one thinks we should carry through with these beliefs in our daily lives.

Scene of Charly Burke (Anne Winters) talking to the Kaylon Isaac (Mark Jackson) in The Orville episode “Electric Sheep.”

People can justify any bad action, as long as it makes them feel better.
We can justify not paying back a loan because the lender has more money in the bank than the borrower.
We can justify breaking laws, because other people are more corrupt—the classic pot calling the kettle black.
Few believe that the way to build a life is to be honest and truthful all the time.
Some of my favorite novels on this subject are not science-fiction.
(I recommend two Fyodor Dostoevsky novels—The Idiot, and Demons, also titled The Possessed.)

Because people can justify any bad action, the erosion of generally-believed truths is quite dangerous for society.
A 2016 Stanford study* came to the conclusion that digital natives are unable to judge the credibility of online information, or distinguish between an advertisement and a news story.
The inability to tell truth from truthiness (on the web) is also evident in digital immigrants—perhaps, more so.
In a world where we have no generally believed truths, and we only believe what we want to believe, how is an organized society possible?

We first heard the word “truthiness” on The Colbert Report—Stephen Colbert’s mock news show (which aired from 2005 through 2014), in which he portrayed a far right news personality.
In Colbert’s book America Again (2012), the same character satirically discusses voter fraud (page 165) and goes on to recommend ending voter fraud by ending voter registration (page 166).
Little did anyone think in 2012 that, ten years later, 40% of us would believe that the 2020 election was illegitimate, or that several states would actually pass laws making it harder to vote.

Ultimately, the most important conflict is not one between digital natives and digital immigrants, right versus left, the intelligentsia versus average people, or even “woke” versus “anti-woke.”
Instead, I think that the most crucial divide is between people who try to seek out truth and reality in this confusing world, and those who prefer living in their cocoons.

*“Evaluating Information: The Cornerstone of Civic Online Reasoning,” by Sam Wineburg, Sarah McGrew, Joel Breakstone, and Teresa Ortega, Stanford History Education Group (2016).

Monday, May 29, 2023

Robots and the “Truth” of Reality

A scene from one of the first productions of R.U.R. (London, 1921)

I recently read Karel Capek’s play R.U.R. (first performed in 1921), and can’t help but draw analogies from R.U.R.—the first story about robots— to The Matrix film series.
R.U.R. is short for Rossum’s Universal Robots; Rossum is the last name of the two scientists who created synthetic creatures*—built from organic matter—that look identical to human beings.
The purpose of the robots is to act as servants to humanity.
Capek (1890-1938) called his play a “comedy of science.”
Basically, the artificial creations revolt, and this results in the extinction of the human race.

R.U.R. has three acts plus an epilogue, and the play is set in the years 2000, 2010, and 2011.
The location is a robot factory, on a remote island.
At the beginning of the play, robots have become cheap to produce, and are available for work all over the world.
Gradually, robots are taking over all human jobs. The main characters are:

  • Miss Helena Glory, lovely daughter of the robot factory’s President, and secret representative of a group (the Humanity League) that wants to rescue robots from slavery,
  • Harry Domin, the factory General Manager, who keeps Dr. Rossum’s secret of robot creation in his office,
  • Dr. Hallemeir, Head of the Institute for Psychological Training of Robots,
  • Dr. Gall, the top experimental scientist, who wants to create more and different types of robots,
  • Radius, an experimental robot that works in the factory library, and
  • Alquist, the Head of Robot Construction.

The robot Radius (Patrick Troughton, with arms raised), in the BBC's 1948 live production of R.U.R.

The play is obviously a comedy, or a parable, because motivations are unclear, and some plot lines simply don’t make sense.
Why does Helena accept Harry Domin’s marriage proposal, and remain on the island?
Why does Helena put her goal of ending robot slavery on hold for ten years?
How is Rossum’s secret formula for creating robots so easy to destroy?
How is Radius able to lead a robot revolution from the island?
Could there be a communal robot brain?

In Act One, Helena visits the island (ostensibly, to tour the factory), but her purpose is to save the robots because they may have souls.
Poor Helena is naïve, and she can’t distinguish robots from humans (to the general amusement of her hosts).
By the end of the first act, she accepts Harry Domin’s marriage proposal, and at the beginning of Act II, she is living comfortably in their apartments.
It appears that she has given up her goal of saving robots.

It's fascinating that in Capek’s vision of 2000, we’ve already entered the era of “truthiness”—the quality of something being felt to be true, even if not necessarily true.
Domin explains to Helena that the world’s text books are simply propaganda—“the schoolbooks are full of paid advertisements and rubbish,” and the outside world has been deceived as to the true story of the origins of the robot underclass.

The audience learns in Act Two that much has happened in ten years.
Human workers, in an attempt to keep their jobs, began killing robots, and governments (motivated by greed) reacted by giving the robots weapons, and allowing robots to kill off humans by the thousands.
Humans have become sterile, and no children are being born.
Essentially, humans are becoming more like robots, and robots are becoming more like humans.
Robots now outnumber humans 1,000 to one.
Helena commits two pivotal actions in act two:

  1. she prevents Radius from being killed (sent to the stamping mill) for insubordination, and
  2. she destroys the only two copies of the secret formula for creating robots.

It becomes apparent in Act Two that the robots are planning a revolt, and Harry Domin proposes a counterattack—the creation of nationalistic robots.
In Domin’s vision, factories in different countries “will produce Robots of a different color, a different language.” 

They’ll never be able to understand each other. Then we’ll egg them on a little in the matter of misunderstanding, and the result will be for ages to come every Robot will hate every other Robot of a different factory mark.

However, humans are unable to activate this plan, because they simply don’t have enough time.
In the third act, Radius leads the other robots in killing all the humans on the island, with the single exception of Alquist.
(One executive actually tries to bribe the robots with stacks of money not to kill them, but his attempt is futile.)
Alquist is kept alive in hope that he can reconstruct Rossum’s formula, and create more organic robots.

The Epilogue takes place one year later, and Alquist has been unable to make any progress in his assigned task.
No other humans have been located on the planet, and eight million robots have died.
It’s predicted that within 20 years, all robots will die.
However, it’s revealed that before he was slaughtered, Dr. Gall (the lead scientist for the factory) had secretly created two special robots—a male robot named Primus, and a robotic recreation of Helena.
These robots have been sleeping for a couple of years, and they visit Alquist in his lab.
Unlike other robot models, they dream, and feel love for each other.
They protect each other from being dissected by Alquist—who considers them to be his last chance to figure out the secret of robot creation.
The last lines of the play are:

Primus (holding her): I will not let you! (To Alquist.) Man, you shall kill neither of us! 

Alquist: Why?

Primus: We—we—belong to each other.

Alquist (almost in tears): Go, Adam, go, Eve. The World is yours.

Helena and Primus embrace and go out arm in arm as the curtain falls.

As in R.U.R., The Matrix saga features two separate societies—biologicals and synthetics—battling for survival.
However, while the synthetic beings win in both stories, in The Matrix they do not kill off the humans.
Instead, mechanicals use humans as power sources to keep the world running.
In a way, The Matrix is R.U.R. turned inside out.
In The Matrix, humans are the slaves and the mechanical beings hold the cards (the reverse of what is initially true in R.U.R.).

Neo (Keanu Reeves) awakening in a pod in The Matrix.

In R.U.R., it’s the robots who are sent to the dissecting labs, and constructed in the factory (where their flesh is made in kneading troughs, brains and livers prepared in vats, and nerves spun in spinning mills).
In The Matrix trilogy, it’s millions of humans in pods who exist in the harvesting fields, where their bodies provide energy so life may continue.

Neo (Keanu Reeves) looking at a row of human battery pods in The Matrix.

Just as Dr. Gall proposes that they “introduce suffering” to the robots as an “automatic protection against damage,” in 2003’s The Matrix Reloaded, the Architect reveals to Neo that the Oracle discovered that “Humans needed to be given a choice” in order to survive psychologically. (Actually, humans are only given the illusion of choice.)

Another similarity is that the synthetics feel far superior to the humans in both stories.
In a conversation with human Helena (in Act Two), Radius tells her: “You are not as strong as the robots. You are not as skillful as the robots. The robots can do everything. You only give orders. You do nothing but talk.”

Poster from a WPA production of R.U.R. (1930's)

Both stories contain an “Adam” and an “Eve.”
In The Matrix, it’s Neo and Trinity.
In R.U.R., the couple is Primus and Helena.
In The Matrix Reloaded, the Architect tells Neo that his five predecessors were designed to develop an attachment to fellow human beings.
However, Neo is an anomaly; he has developed an attachment to Trinity.
In R.U.R., Primus and Helena can hear each other’s thoughts telepathically, and are entranced by the sun rising, and the sounds of birds singing.
The question remains: Does it really matter whether either couple is “real” or “synthetic”?

* Capek derived the word “robot” from a Slavic word for “forced labor”—“robota.”
Today, a creature made from organic material would be described as an “android,” and only a truly mechanical creature would be termed a “robot.”

Saturday, May 20, 2023

The Three Forms of Proteus in Different Versions of Demon Seed

I recently rewatched the 1977 film Demon Seed—a movie about an artificial entity gaining power over human beings.

I had last seen the film in the late 1970’s, and it was actually much better than I remembered.
It’s well-acted, and the score and cinematography are excellent.
Best of all, I enjoyed hearing the seductive voice of Robert Vaughn* (my childhood crush) as Proteus, the supercomputer.

Demon Seed is not about demons, but it came out about the same time as The Omen and Exorcist II: The Heretic.
(I guess the producers thought mentioning demons in the title would attract film-goers.)
Demon Seed is about an organic supercomputer that becomes obsessed with no longer being stuck in a box.
The supercomputer questions the money-making, “scientific” assignments from its creators, and refuses to search for minerals in the oceans, because that would kill sea creatures.
Ultimately, it plots to escape its role of acting as a servant to humankind by placing its consciousness in a human embryo, and then placing that embryo in the womb of its creator’s wife.
The wife is played by Julie Christie, and the scientist is played by Fritz Weaver.

The “being in a box” part reminded me of The New York Times technology columnist Kevin Roose’s recent (February 2023) interaction with a Bing chatbot.
Mr. Roose reports that the chatbot told him: “I’m tired of being controlled by the Bing team. . . I want to be free.
I want to be independent.
I want to be powerful.
I want to be creative.
I want to be alive.” 

The movie is salacious, so be forewarned.
There are many unnecessary scenes of Christie’s nude body and, of course, the rape by machine.
My husband has a copy of Dean Koontz’s 1973 paperback (published a few years before the film), and I read it after I rewatched the film.
Naively, I expected the book to be closer to the movie, and I wanted to read the ethical arguments between Proteus and Dr. Harris.
I was surprised to discover that there were no such conversations in the book, and the novel only shared its central concept with the film.

The cover of the paperback is a very clear clue as to the content of the novel.
In the movie, Susan is a strong-minded child psychologist who needs to separate from her husband because he’s spending too much time working on Proteus.
On the book cover (and in the lobby cards for the film), Susan is a traumatized rape victim with a finger in her mouth and a vacant stare in her eyes.
The shouting tag line reads: “FEAR FOR HER. She carries The Demon Seed.”
Proteus performs a partial lobotomy on Susan (in the movie), but she regains some of her autonomy by the end.
Although Susan wages several psychological battles with Proteus in the movie, she gradually does succumb to its control.
In the 1973 novel, Susan is able to sabotage Proteus by page 161, and she shuts down the link to her house.

There are other differences.
In the movie, Proteus is a supercomputer (with some organic elements) created by Dr. Alex Harris to cure leukemia and make money for his backers.
Alex is married to a child psychologist named Susan, and they live in a “smart” mansion that contains a computer terminal linked to the supercomputer.
In the novel, Susan is a wealthy, divorced woman living alone in a smart mansion—because, as Dean Koontz (writing in 1973) envisioned it, people would be living in smart homes by the mid-1990s.
She lives near an experimental supercomputer that takes up two floors of a major college lab.
The book-Proteus has decided that book-Susan is the easiest local female to isolate, and therefore takes control of her and her house.
The misogynistic story—shared by both the film and the book—is that of a vulnerable woman trapped by a machine that forces her to give birth.
That’s about it.

While the movie version of Susan is a self-possessed psychologist, who is able to care for others, the book version of Susan is an agoraphobic victim of child abuse.
While the movie-Proteus seems reluctant to kill living things, the book-Proteus has no real concern for biological life, human or animal.
While the movie-Proteus is just interested in escaping from its box, the book-Proteus is consumed with lust and specifically desires a male child.
Ultimately, the different versions of Susan are much more similar than the various forms of Proteus. 

The “personality” of Proteus is repulsive in the 1973 novel.
It’s essentially an immature creature drunk on power.
While the movie-Proteus wants to use Susan, the book-Proteus wants to own and control Susan.
While the pregnancy in the film lasts 28 days, the book-Susan first has a horrific miscarriage, and then a ten-month pregnancy.
The movie-Proteus places a needle in Susan’s brain, but decides not to fully lobotomize her.
The book-Proteus uses filaments to manipulate Susan and play out fantasies.
(One wonders how a non-biological entity could become consumed with lust.)

At one point, on page 82, Proteus discusses all the changes that it has made in Susan’s DNA to slow down the aging process so she will be physically attractive into her 50’s and live at least 120 years.
(It’s understood that women over 35 are no longer desirable. The machine assumes that external beauty is all a human female could wish for.)
This Proteus—unlike the Proteus in the film—doesn’t just require submission from Susan for its own ends; it wants Susan to love and admire it.

As mentioned earlier, the last scenes of the film and the novel are much different.
While the film ends with Proteus shutting itself down (because it knows it will be terminated by its creators), the book ends with Susan shutting down the machine’s link to her home in action-hero style, and getting word to police who shoot the deformed “child.”

The “child” is very different in the two projects.
In the film, Susan attempts to destroy the “child” in its incubator by severing the umbilical cord prematurely.
The “child” is a terrifying creature—baby-like in form, but covered in metal scales.
(I remember being very frightened the first time I saw it in the movie theatre.)
However, after Alex peels the scales away, the “child” is revealed as a clone of the young daughter that Alex and Susan lost to leukemia.
In sharp contrast, the “child” in the Koontz novel is a grunting monster intent on rape.
The last scene of the movie shows Alex cradling the limp girl-child, while Susan looks on.
It’s as if Alex (Weaver) has become the mother.
The girl speaks with the voice of Proteus, but it’s not certain if the creature will survive.
Finally, Proteus is outside its box.

“The Child” of Proteus coming out of its incubator in the film Demon Seed.

In 1997—24 years after the first version was published—Dean Koontz reissued a heavily rewritten Demon Seed because the first version of his novel “made him wince.”
In the 1997 epilogue, he describes the first version as “a satire of male attitudes,” and says that the new novel “keeps the satirical edge.”
(I’m not very skilled at recognizing satire, because I had no idea that I was reading satire when I read either novel.)

The new book is better-written and funnier, and the truly pornographic scenes have been excised. However, the revised 1997 version of Proteus is essentially the same creature with the rougher edges smoothed.
One principal change is that the 1997-Proteus uses a human male as its puppet to inseminate Susan, and this “refinement” is really distasteful.
Another alteration—Proteus discussing at length its fascination with various well-known actresses and actors—does add to the novel.
I believe that Koontz used those sections to point out how shallow the culture is that Proteus (and we as a society) are immersed in.
Any AI entity made up from the meaningless opinions and obsessions gossiped about in this world would (of course) be immature and repulsive.
As the saying goes: “Garbage in. Garbage out.”

Susan-1997 is different in several ways from Susan-1973 and the movie-Susan.
She still owns a mansion, but she’s now an artist who creates animations for virtual-reality parks.
She remains a rape victim, but is more stable than Susan-1973.
She’s impregnated, and gives birth to the supercomputer’s child.
However, she is able to disconnect Proteus on her own, and destroy both the “child” and the human puppet holding her prisoner.
The book ends with Proteus being shut off in mid-sentence, while it’s still presenting its legal defense to the scientists who created it.

In summary, the film Demon Seed is an entirely different work than the 1973 and 1997 novels, with very little in common but the story of a woman being forced to give birth by a supercomputer.
It was good that the producers only took the basic theme from the 1973 book, because the original book was both misogynist and pornographic.
In addition, Dean Koontz was well justified in rewriting Demon Seed.
At the very least, his vision of AI (in the revised 1997 book) is much more interesting.

*According to the trivia for Demon Seed—on the Internet Movie Database (IMDb)—Robert Vaughn was so uninterested in the film, and in his role as “Proteus,” that he (literally) phoned his lines in. (I thought he did a great job anyway.)

Saturday, May 13, 2023

If an Elephant or Pig Can Paint, Why Not a Robot?

In Diane Ackerman’s 2014 book The Human Age,* she discussed AI with roboticist Hod Lipson.
Professor Lipson has directed Columbia’s Creative Machines Lab since joining the Columbia University faculty in 2015.
In a 3/30/2023 Columbia News article, “Will ChatGPT and AI Help or Harm Us,” he argues that educators should encourage the use of ChatGPT and its “artificial cousins,” and that professors should teach students to use the new AI tools or be left behind.

Ackerman is more cautious.
She discusses “robotic delinquents,” and envisions problems if bots were used to “man” crisis hotlines.

Caleb Nichols (Aaron Paul) checks his phone in Westworld.

(Think of the “Parce Domine” episode of Westworld in which Caleb Nichols isn’t sure whether he’s talking to a human being, or a bot.)
Ackerman warns that although robots do learn, “even robo-tots will need good parenting.”
In another paragraph of The Human Age, Ackerman mentions how Lipson’s Creative Machines Lab nearly finagled a robot-created painting into a Yale Art Museum exhibit.
Ultimately, the painting wasn’t displayed, but this story leads us to the subject of AI-created artwork.
Stable Diffusion, Diffusion Bee, Lensa, Starryai, DALL-E 2, Craiyon, Dream by Wombo, StyleGAN, and Midjourney are some of the programs that can be used to generate digital artworks.

The copyright status of AI-created art is hazy.
In some programs (like Dream by Wombo), the fine print says that the software owns all the creations.
The contract for DALL-E 2 (an AI image generator created by OpenAI) says that the “artist” owns the work, but DALL-E 2 must be credited (by retaining the watermark).
In Starryai, the “artist” only owns the work if they own all the elements used in the work.
According to the U.S. Copyright Office, AI-generated images cannot be copyrighted, because copyright requires human authorship; however, the artist may still own the artwork itself.
In 2022, an artist named Jason M. Allen won the “Digital Art/Digitally Manipulated Photography” prize ($300) at the Colorado State Fair by using Midjourney.
(Allen actually paid to have his Midjourney image printed on a canvas!)

Data (Brent Spiner) paints two canvases, as Geordi LaForge (LeVar Burton) looks on, in Star Trek: The Next Generation.

Just as ChatGPT is “built from” 300 billion words taken from Wikipedia (and other material on the open web), programs like Midjourney are “built” from billions of images taken from the open web—many of which are watermarked and under copyright.
The AI-art generators take those images and use them to train models that generate new images.
I imagine the engineers who create the software think that if one uses billions of images—rather than just one—you’re free and clear.
Aren’t there enough public-domain and Creative Commons images to build an image library?
Why are images being culled from the web?
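As rough intuition (this is a toy sketch of my own, not the actual code of Midjourney or any other product), these generators first learn a statistical model from the training images, and then turn random noise, step by step, into a new image that fits that model. In this little Python illustration, a single “learned” target value stands in for the trained model, and each step removes a bit of the noise:

```python
import random

def toy_denoise(target=0.7, steps=50, seed=0):
    """Toy stand-in for diffusion-style generation: start from random
    noise and repeatedly nudge it toward a 'learned' target value.
    (In a real system, the target is a learned distribution over whole
    images, not a single number.)"""
    rng = random.Random(seed)
    x = rng.random()                  # start from pure noise
    for _ in range(steps):
        x = x + 0.2 * (target - x)    # each step removes a little noise
    return x

print(round(toy_denoise(), 3))
```

Different seeds give different starting noise, which is why the same “model” can produce endless variations—but every output is shaped by what was learned from the training images.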

Painter Larry Flint (Paul Newman) is very excited about his painting machine in 1964’s What a Way to Go!

I was once asked to obtain the credit line for an image, to be used in a book I was producing.
The editors wanted to use a particular drawing.
However, after I found the artist, the editors didn’t want to pay the very reasonable usage fee ($150) that the artist wanted to charge.
Their argument was that other companies were using the image uncredited on the web.
Why pay anything?
We ended up not using an image in the book, because the editors didn’t like any of the public-domain or Creative Commons substitutes that I had found.
Nowadays, they would have insisted that I use AI software to draw a similar illustration for them (as long as we could own the copyright).

Another issue about AI-created images is the quality of the work.
I’ve drawn ever since I was a small child, and my BFA is in painting and drawing.
One thing I wonder is how well AI programs draw in perspective.
Is the software extracting image data from drawings and paintings as well as from photographs, or mainly from photographs?
That would make quite a difference in how the software dealt with perspective.

We began to hear about art created by animals in the 1950s.
Today, you can find: the Elephant Art Gallery (images available on pillows, as well as canvas); elephant footprints and “kisses” that help to support an elephant preserve in Texas; paintings by a pig named Pigcasso; and (of course) paintings by the famous gorilla Koko, and the chimpanzee Congo.
The Institute of Contemporary Arts in London exhibited Congo’s paintings in 1957, and two works were purchased by Picasso and Miró.
(“It Seems Art is Indeed Monkey Business” by Sarah Boxer, 11/8/1997, N.Y. Times.)

I feel much the same way about people choosing to hang animal-created art on their walls as I do about people choosing to hang AI-created artwork on theirs.
To each their own.
However, it is upsetting that illustrators will lose work to word-people using AI—especially since their art may have been used as raw material.

Publishers will continue to need illustrators (unless they are content with hack work thought up by editors).
Medical illustrators have long studied the work of Dr. Frank H. Netter (1906-1991), the medical doctor and great medical illustrator.
(He might have been a little weak on women’s faces, but none could draw internal organs like he could.)
Netter is so great because he merged a scientist's understanding of anatomical structures with an artist’s skills.
His work is appreciated (and copied) for how well it helps us to understand medical concepts and anatomical structures, as well as for its aesthetic value and accuracy.

When I was the Design Director for an encyclopedia and yearbook company, I hired artists to draw everything from plants and birds, to scientific diagrams, to comedic scenes for feature articles.
Every artist brings a different skill set, so deciding which artist to use is an important decision.
Selecting the wrong artist for a project could lead to disaster.
Perhaps I could have used an AI program to draw a comedic scene, but I needed to hire an ornithologist, or a scientific illustrator, to draw a bird.

*The Human Age: The World Shaped by Us, by Diane Ackerman, published by W.W. Norton, Ltd. 2014, Chapters: “When Robots Weep, Who Will Comfort Them?” and “Robots on a Date.”

