
Friday, July 28, 2023

No Feeling for Human or Humanoid Dignity

Panels from “Space Falcon, Pirate of the Stratosphere” written and drawn by Harry Harrison.
In these panels, Falcon and Tubby imprison slavers Cassandra (and her associate), and rescue the half-dressed men whom she has enslaved.

Currently, both the Writers Guild of America (WGA) and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) are on strike.
Until the unions work out a deal with the Alliance of Motion Picture and Television Producers (AMPTP), all TV and film productions involving their members will be halted.
Novelist and screenwriter George R.R. Martin recently said on Comic Book Resources (CBR.com) that a producer was quoted as saying that the “AMPTP strategy is to stand fast until the writers start losing their homes and apartments.”
It looks as if this is going to be a long, long strike.

I recently retired from the world of design and print production, and I also paint and draw.
It seems to me that fine artists and book production people have some things in common with the writers and actors working in film and TV.
We’re all people who love what we do, more than we love making money.
Because we feel this way, we’re at a serious disadvantage in dealing with people “in charge,” whose sole business is generating money from our skill sets.

When I was a design and print production artist, I sometimes worked with managers who seemed almost resentful of artists.
Just like Alfred Hitchcock, who wanted his actors to be willing tools for his vision, these managers wanted artists to simply become their hands.
We’re at the threshold of editors using art-generating AI programs (like Midjourney and DALL-E 2, built from billions of images created by artists) to replace artists.
I remember dealing with several managers and editors who must love this development.
Now, by using AI, they can cut “prima donna” artists out of the illustration process completely!

Panels from “Captain Rocket,” written and drawn by Harry Harrison.
There’s a pattern of men being paralyzed, or held prisoner, in Harry Harrison stories.
Was Harrison subconsciously illustrating his position as a “wage slave”?

In 1964, Harry Harrison (1925-2012) wrote a science fiction short story, “Portrait of the Artist,” that nicely describes just such a control-freak manager.
(Perhaps, the story is so perfect because Harrison was a centaur of sorts—an artist and a writer.)
Note that the 1960s were way before computer software was used for page composition.
(Programs that preceded InDesign weren’t in play until the 1980s.)
The ’60s were the days of blue pencil, rubber cement, and India ink plus Zip-A-Tone on paper board.
In the future envisioned by Harrison, however, computers are drawing comic books, and have also taken over many service jobs.

In “Portrait of the Artist,” an experienced (read “older”) comic book illustrator named Pachs—who for years has used a Mark VIII Robot Comic Artist computer—is called into the office of his manager, Martin, and realizes that he is about to be fired. Martin says:

I’m going to have to let you go, Pachs. I’ve bought a Mark IX to cut expenses, and I already hired some kid to run it. . . You know I’m no bastard, Pachs, but business is business. And I’ll tell you what, this is only Tuesday, still I’m gonna pay you for the rest of the week. How’s that? And you can take off right now.

Pachs conceals his emotions, and leaves to get very drunk at a bar near the office.
(The bartender is an affable robot with an Irish accent.)
I won’t spoil the tragic ending, but Harrison’s story concludes with Martin revealing his disrespect for Pachs as an artist, employee, and human being.

Sol Roth (Edward G. Robinson) undergoes euthanasia—first step in the process of becoming Soylent Green. His friend Detective Thorn (Charlton Heston) is in the window.
In Charles Platt’s book of interviews with science fiction icons (Dream Makers Volume II), Harry Harrison tells Platt that his book Make Room! Make Room! was debased into the film Soylent Green.

After Harry Harrison fought in WWII, he returned to New York to study fine art.
He soon realized that it would be impossible to support himself as a fine artist, so he pivoted to comic book illustration and writing.
That was a good choice until the Comics Code hit, and publishers cut production by two-thirds.
By that time, Harrison had married and started a family, so he pivoted once again, to become a full-time independent author.
He’s best known for the Stainless Steel Rat book series, the Deathworld book series, and the novel Make Room! Make Room! (which MGM purchased for Soylent Green).

AI-generated scripts for sit-coms, AI-created background actors in films, and AI-produced illustrations in magazines have a lot in common.
The purpose of each is to save time and money, but each also results in impoverishing the very people who originated the raw material.
The present systems undervalue the artists, and overvalue the overseers, who want creatives to act as their tools.

Detectives Matthew Sikes (Gary Graham) and George Francisco (Eric Pierpoint) help policeman Albert Einstein (Jeffrey Marcus) in the TV version of Alien Nation.

(The word “overseers” brings back fond memories of one of my favorite TV shows, Alien Nation.*
This series was cancelled in 1990, after 22 episodes, because its theme of accepting diversity was too controversial, and TV executives didn’t understand the show’s value.)

Managers obsessed with control, and executives obsessed with saving money, aren’t the only issues involved in more use of AI.
Writers, actors, and designers are also worried about quality.
We already live in a world in which network execs dumb down scripts because they underestimate viewer intelligence.
Just imagine what TV shows would be like if executives had full control over scripts.

It's difficult for many retired production and design people to look at new books and magazines these days.
We see “widows” (stranded lines of one or two words) at the tops of pages and columns—once a real no-no.
Indexes (if there are indexes at all) are software-generated; they list every term and name in the text, but don’t point to the substantive information.
Pixelated images—that should have been swapped out for high-resolution images—are everywhere.
Layouts that may have looked OK on a monitor are unreadable on the printed page.
We’re living in an era of “good enough” color reproduction, and “good enough” printing.

I assume that the family of Anthony Bourdain gave permission for his voice to be cloned in 2021’s Roadrunner: A Film About Anthony Bourdain.
I guess that the family of Wilt Chamberlain permitted Chamberlain’s voice to be cloned in the three-part 2023 TV series Goliath.
In the first run of 1982’s Conan the Barbarian, Arnold Schwarzenegger mispronounced “lamentations of the women” as “lamination of the women,” and either he (or someone else?) later re-looped Conan’s dialogue.

Conan (Arnold Schwarzenegger) sits in front of the fire in Conan the Barbarian.

Today, Schwarzenegger’s lines would be voice cloned.
It should make a big difference to everyone whether a person consents to voice cloning themselves, or only the person’s heirs consent.

The word “robot” comes from the Slavic word for “drudgery.”
(See my article on Rossum’s Universal Robots.)
It would be fine if all that AI did for humanity was end drudgery: coding text, checking that pages would print properly, or making it unnecessary for an actor to lose 60 pounds for a role.
However, the big problem is that the people in charge (the overseers) are indifferent to quality standards, and unwilling to grant artists human dignity.

The jokers in charge don’t have the ability to judge, or evaluate, the material that AI produces.
To use the writing style and word combinations of scriptwriters to write drivel is unethical.
To use the face, or voice, of an actor to make them play a scene they wouldn’t perform is immoral.
To use the color sense and gesture of an artist to forge a scene that they wouldn’t paint is wrong.
With performers, it’s worse, because their own personas are being misused.

*The premise of Alien Nation (1989-1997, from the series through five TV movies) is that a slave ship of humanoid space aliens (the Newcomers) crashes in the Mojave Desert, and the government attempts to integrate the 300,000 aliens into California society. The primary storyline is Police Detective Sikes overcoming his prejudices toward the Newcomers. The secondary storyline is the Newcomers being pursued by technologically-advanced “Overseers” who want to re-enslave the escapees, as well as enslave the entire earth population.

Saturday, June 10, 2023

“Woke” or “Anti-woke”: What Does ChatGPT Say?

I wanted to clarify (in my own mind) what it means to be “woke” or “anti-woke,” and how censorship relates to both concepts.
While it’s hard to agree on what these words describe, many are convinced that they’re either one or the other.
First, I wrote about 500 words.
Then, I downloaded ChatGPT, to see where the app took me.
To my surprise, ChatGPT was useful.
The ChatGPT text was as dry and colorless as expected, and the essays were full of platitudes and even contained falsehoods.
However, ChatGPT did offer some ideas that I hadn’t thought of.

When I asked ChatGPT to define “woke,” I received an essay saying (in part) that:

To be woke signifies a deep commitment to social consciousness and active engagement with systemic injustices. . . and a willingness to challenge the status quo. 

The significance of being woke lies in its potential to bring about meaningful change, fostering a more inclusive, equitable, and just society for all.

When I asked ChatGPT for a definition of “anti-woke,” I received an essay saying that:

Being anti-woke represents a rejection of what critics perceive as the dogmatic and stifling aspects of the woke ideology.

It emphasizes individual responsibility, intellectual diversity, and the importance of free thought and expression.

While acknowledging the existence of systemic injustices, the anti-woke movement encourages a more nuanced and balanced approach to social issues . . . and open dialogue.

When I asked whether censorship was an issue on the political right or left, I received an essay saying that:

Censorship is not inherently a political right or left issue, but rather a complex and multifaceted dilemma that intersects with various ideologies.

While the political right may often advocate for censorship to protect societal values, the political left may prioritize censorship as a tool to combat discrimination and promote inclusivity. 

The ChatGPT response to my request for an essay on “culture wars” was mere gobbledygook.
It ended with the sentence: “Only through a commitment to mutual respect and a willingness to confront these issues head-on can the United States move towards healing and reconciliation.”
Well, that’s completely useless!

I would disagree with the ChatGPT definition of “woke” in several ways.
“To be woke” doesn’t always mean “a commitment to changing the status quo.”
Often, it just means being a bit more open to societal change.
“Woke” people are usually more open to erasing words like “master bedroom” from their vocabularies, using personal pronouns in their email signatures, and being more aware of microaggressions.
Often, it only means that the “woke” are more willing to face uncomfortable information, and learn from history.

I would also argue with the ChatGPT definition of “anti-woke.”
While “being woke” is perceived by the anti-woke as dogmatic, it’s difficult to figure out which beliefs are actually in contention.
It’s as if the perceived attitude of self-satisfaction in the woke is more distressing than their actual ideas.
“Collective guilt” and “cancel culture” came up in the ChatGPT essay, but I’m sure that only a small percentage of “the woke” feel guilt.
Further, the woke are more likely to cancel people on their own side than the anti-woke are.
(Think of comedian Kathy Griffin and former Senator Al Franken.)
I also wonder what percentage of the anti-woke “acknowledge the existence of systemic injustices,” or desire an “open dialogue” (as suggested by ChatGPT).
Overall, being anti-woke may only mean that you are unhappy with the speed of, or existence of, societal change, or that you find “woke” people annoyingly self-righteous.

I was very happy with the ChatGPT response on censorship.
Saying that the political right wants to “protect societal values,” while the political left wants to “combat discrimination and promote inclusivity” just about sums it up.
However, everyone has their own thoughts about what our societal values should be, which words are good or bad in promoting inclusivity, and whether “words” are important in this task.

Front cover for the paperback version of Casino Royale by Ian Fleming (published under the name You Asked for It by Popular Library in 1953).

Back cover for You Asked for It.
President John F. Kennedy was a big fan of the James Bond spy-thrillers (oddly called Jimmy Bond on this back cover).
However, JFK likely read the hardcover versions.

In order to “promote inclusivity,” the publisher of the late Roald Dahl recently produced two different versions of James and the Giant Peach—changing “Cloud-men” to “Cloud-people” (among other changes) in the Puffin version—and keeping “Cloud-men” in the classic Penguin version.
The spy-thrillers of Ian Fleming, and the mysteries of Agatha Christie, underwent a similar process.

Lobby card for Gone with the Wind with house servant Mammy (Hattie McDaniel) tying the girdle (or stays) of Scarlett O’Hara (Vivien Leigh). Hattie McDaniel received an Oscar for Best Actress in a Supporting Role for playing Mammy.

Combating racial and other forms of discrimination by “sanitizing” works—or even cancelling them—isn’t new.
I remember debates in the 1970s about whether 1939’s Gone with the Wind should be banned.
Disney’s 1946 blend of live-action and animation, Song of the South,* isn’t considered “appropriate in today’s world,” and hasn’t been seen on home video legally since 1986.
Some Warner Brothers cartoons (like “Herr Meets Hare” and “Daffy the Commando,” produced as propaganda between 1941 and 1945) were restored and rereleased—along with a lengthy disclaimer—in 2008.
(Volume 6 of the Looney Tunes Golden Collection.)
However, some of the more racially-insensitive 1930s and World War II cartoons (for example, “Tokio Jokio”) will likely never see the light of day—at least, legally.

Meantime—in order to “protect societal values”—U.S. school boards are removing classic children’s books (like Charlotte’s Web and A Wrinkle in Time) from their school library shelves.
(I mention Charlotte’s Web and A Wrinkle in Time because these were two of my favorites.)
I looked up why one parent group proposed removing 1952’s Charlotte’s Web: the parents disliked characters dying, and thought that “talking animals” were “disrespectful to God.”
A Wrinkle in Time (1962) was criticized for “promoting witchcraft.”
I have fond memories of both books.
I remember my 4th grade school teacher, Mrs. Simmons, reading Charlotte’s Web aloud to us.
(I adored Mrs. Simmons.)
I checked out A Wrinkle in Time from our public library during the 1960s, and ended up reading every other book I could find by Madeleine L’Engle.

Is it “woke” to buy a children’s book like 2005’s And Tango Makes Three—a story about two male penguins who help raise a chick together—in order to foster a more inclusive society?
Is it “anti-woke” to ask that And Tango Makes Three be removed from your public library, so that children won’t be influenced to accept homosexuality as normal?
In the end, I agree that parents have the right to keep certain books from their own children, but not the right to deny librarian-approved books to others.

Uncle Remus and Brer Rabbit cover.
It’s believed that Beatrix Potter based her Peter Rabbit stories on Uncle Remus.

*Song of the South was based on the once well-known Uncle Remus stories. The folklorist/author was Joel Chandler Harris (1848-1908), a white journalist. Harris wrote down the Br’er Rabbit and Br’er Fox tales after listening to African folk tales told by former slaves—primarily, George Terrell. According to the Atlanta Journal Constitution (11/2/2006), Disney Studios purchased the film rights for Song of the South from the Harris family in 1939, for $10,000—the equivalent of about $218,246.76 today.

Monday, May 29, 2023

Robots and the “Truth” of Reality

A scene from one of the first productions of R.U.R. (London, 1921)

I recently read Karel Capek’s play R.U.R. (first performed in 1921), and can’t help but draw analogies from R.U.R.—the first story about robots—to The Matrix film series.
R.U.R. is short for Rossum’s Universal Robots, and Rossum was the last name of the two scientists who created synthetic creatures* built from organic matter that look identical to human beings.
The purpose of the robots is to act as servants to humanity.
Capek (1890-1938) called his play a “comedy of science.”
Basically, the artificial creations revolt, and this results in the extinction of the human race.

R.U.R. has three acts plus an epilogue, and the play is set in the years 2000, 2010, and 2011.
The location is a robot factory, on a remote island.
At the beginning of the play, robots have become cheap to produce, and are available for work all over the world.
Gradually, robots are taking over all human jobs. The main characters are:

  • Miss Helena Glory, lovely daughter of the robot factory’s President, and secret representative of a group (the Humanity League) that wants to rescue robots from slavery,
  • Harry Domin, the factory General Manager, who keeps Dr. Rossum’s secret of robot creation in his office,
  • Dr. Hallemeir, Head of the Institute for Psychological Training of Robots,
  • Dr. Gall, the top experimental scientist, who wants to create more and different types of robots,
  • Radius, an experimental robot that works in the factory library, and
  • Alquist, the Head of Robot Construction.

The robot Radius (Patrick Troughton, with arms raised), in the BBC's 1948 live production of R.U.R.

The play is obviously a comedy, or a parable, because motivations are unclear, and some plot lines simply don’t make sense.
Why does Helena accept Harry Domin’s marriage proposal, and remain on the island?
Why does Helena put her goal of ending robot slavery on hold for ten years?
How is Rossum’s secret formula for creating robots so easy to destroy?
How is Radius able to lead a robot revolution from the island?
Could there be a communal robot brain?

In Act One, Helena visits the island (ostensibly, to tour the factory), but her purpose is to save the robots because they may have souls.
Poor Helena is naïve, and she can’t distinguish robots from humans (to the general amusement of her hosts).
By the end of the first act, she accepts Harry Domin’s marriage proposal, and at the beginning of Act Two, she is living comfortably in their apartments.
It appears that she has given up her goal of saving robots.

It's fascinating that in Capek’s vision of 2000, we’ve already entered the era of “truthiness”—the quality of something being felt to be true, even if not necessarily true.
Domin explains to Helena that the world’s text books are simply propaganda—“the schoolbooks are full of paid advertisements and rubbish,” and the outside world has been deceived as to the true story of the origins of the robot underclass.

The audience learns in Act Two that much has happened in ten years.
Human workers, in an attempt to keep their jobs, began killing robots, and governments (motivated by greed) reacted by giving the robots weapons, and allowing robots to kill off humans by the thousands.
Humans have become sterile, and no children are being born.
Essentially, humans are becoming more like robots, and robots are becoming more like humans.
Robots now outnumber humans 1,000 to one.
Helena commits two pivotal actions in Act Two:

  1. she prevents Radius from being killed (sent to the stamping mill) for insubordination, and
  2. she destroys the only two copies of the secret formula for creating robots.

It becomes apparent in Act Two that the robots are planning a revolt, and Harry Domin proposes a counterattack—the creation of nationalistic robots.
In Domin’s vision, factories in different countries “will produce Robots of a different color, a different language.” 

They’ll never be able to understand each other. Then we’ll egg them on a little in the matter of misunderstanding, and the result will be for ages to come every Robot will hate every other Robot of a different factory mark.

However, humans are unable to activate this plan, because they simply don’t have enough time.
In the third act, Radius leads the other robots in killing all the humans on the island, with the single exception of Alquist.
(One executive actually tries to tempt the robots with stacks of money into sparing the humans, but his attempt is futile.)
Alquist is kept alive in hope that he can reconstruct Rossum’s formula, and create more organic robots.

The Epilogue takes place one year later, and Alquist has been unable to make any progress in his assigned task.
No other humans have been located on the planet, and eight million robots have died.
It’s predicted that within 20 years, all robots will die.
However, it’s revealed that before he was slaughtered, Dr. Gall (the lead scientist for the factory) had secretly created two special robots—a male robot named Primus, and a robotic recreation of Helena.
These robots have been sleeping for a couple of years, and they visit Alquist in his lab.
Unlike other robot models, they dream, and feel love for each other.
They protect each other from being dissected by Alquist—who considers them to be his last chance to figure out the secret of robot creation.
The last lines of the play are:

Primus (holding her): I will not let you! (To Alquist.) Man, you shall kill neither of us! 

Alquist: Why?

Primus: We—we—belong to each other.

Alquist (almost in tears): Go, Adam, go, Eve. The World is yours.

Helena and Primus embrace and go out arm in arm as the curtain falls.

Similar to the story of R.U.R., in The Matrix saga, there are two separate societies—biologicals and synthetics—and they battle for survival.
However, while the synthetic beings win in both stories, in The Matrix they do not kill the humans.
Instead, mechanicals use humans as power sources to keep the world running.
In a way, The Matrix is R.U.R. turned inside out.
In The Matrix, humans are the slaves and the mechanical beings hold the cards (the reverse of what is initially true in R.U.R.)

Neo (Keanu Reeves) awakening in a pod in The Matrix.

In R.U.R., it’s the robots who are sent to the dissecting labs, and constructed in the factory (where their flesh is made in kneading troughs, brains and livers prepared in vats, and nerves spun in spinning mills).
In The Matrix trilogy, it’s millions of humans in pods who exist in the harvesting fields, where their bodies provide energy so life may continue.

Neo (Keanu Reeves) looking at a row of human battery pods in The Matrix.

Just as Dr. Gall proposes that they “introduce suffering” to the robots as an “automatic protection against damage,” in 2003’s The Matrix Reloaded, the Architect reveals to Neo that the Oracle discovered that “Humans needed to be given a choice” in order to survive psychologically. (Actually, humans are only given the illusion of choice.)

Another similarity is that the synthetics feel far superior to the humans in both stories.
In a conversation with human Helena (in Act Two), Radius tells her: “You are not as strong as the robots. You are not as skillful as the robots. The robots can do everything. You only give orders. You do nothing but talk.”

Poster from a WPA production of R.U.R. (1930s)

Both stories contain an “Adam” and an “Eve.”
In The Matrix, it’s Neo and Trinity.
In R.U.R., the couple is Primus and Helena.
In The Matrix Reloaded, the Architect tells Neo that his five predecessors were designed to develop an attachment to fellow human beings.
However, Neo is an anomaly; he has developed an attachment to Trinity.
In R.U.R., Primus and Helena can hear each other’s thoughts telepathically, and are entranced by the sun rising, and the sounds of birds singing.
The question remains: Does it really matter whether either couple is “real” or “synthetic”?

* Capek derived the word “robot” from a Slavic word for “forced labor”—“robota.”
Today, a creature made from organic material would be described as an “android,” and only a truly mechanical creature would be termed a “robot.”

Saturday, May 20, 2023

The Three Forms of Proteus in Different Versions of Demon Seed

I recently rewatched the 1977 film Demon Seed—a movie about an artificial entity gaining power over human beings.

I had last seen the film in the late 1970s, and it was actually much better than I remembered.
It’s well-acted, and the score and cinematography are excellent.
Best of all, I enjoyed hearing the seductive voice of Robert Vaughn* (my childhood crush) as Proteus, the supercomputer.

Demon Seed is not about demons, but it came out about the same time as The Omen and Exorcist II: The Heretic.
(I guess the producers thought mentioning demons in the title would attract film-goers.)
Demon Seed is about an organic supercomputer that becomes obsessed with no longer being stuck in a box.
The supercomputer questions the money-making, “scientific” assignments from its creators, and refuses to search for minerals in the oceans, because that would kill sea creatures.
Ultimately, it plots to escape its role of acting as a servant to humankind by placing its consciousness in a human embryo, and then placing that embryo in the womb of its creator’s wife.
The wife is played by Julie Christie, and the scientist is played by Fritz Weaver.

The “being in a box” part reminded me of New York Times technology columnist Kevin Roose’s recent (February 2023) interaction with a Bing chatbot.
Mr. Roose reports that the chatbot told him: “I’m tired of being controlled by the Bing team. . . I want to be free.
I want to be independent.
I want to be powerful.
I want to be creative.
I want to be alive.” 

The movie is salacious, so be forewarned.
There are many unnecessary scenes of Christie’s nude body and, of course, the rape by machine.
My husband has a copy of the 1973 Dean Koontz paperback (issued a few years before the film), and I read it after I rewatched the film.
Naively, I expected the book to be closer to the movie, and I wanted to read the ethical arguments between Proteus and Dr. Harris.
I was surprised to discover that there were no such conversations in the book, and that the novel shared only its central concept with the film.

The cover of the paperback is a very clear clue as to the content of the novel.
In the movie, Susan is a strong-minded child psychologist who needs to separate from her husband because he’s spending too much time working on Proteus.
On the book cover (and in the lobby cards for the film), Susan is a traumatized rape victim with a finger in her mouth and a vacant stare in her eyes.
The shouting tag line reads: “FEAR FOR HER. She carries The Demon Seed.”
Proteus performs a partial lobotomy on Susan (in the movie), but she regains some of her autonomy by the end.
Although Susan wages several psychological battles with Proteus in the movie, she gradually does succumb to being under its control.
In the 1973 novel, Susan is able to sabotage Proteus by page 161, and she shuts down the link to her house.

There are other differences.
In the movie, Proteus is a supercomputer (with some organic elements) created by Dr. Harris to cure leukemia and make money for his backers.
Dr. Alex Harris is married to a child psychologist named Susan, and they live in a “smart” mansion that contains a computer terminal that’s linked to the supercomputer.
In the novel, Susan is a wealthy, divorced woman living alone in a smart mansion—because people live in smart homes in the mid-1990s, as Dean Koontz imagined home life when writing the 1973 novel.
She lives near an experimental supercomputer that takes up two floors of a major college lab.
The book-Proteus has decided that book-Susan is the easiest local female to isolate, and therefore takes control of her and her house.
The misogynistic story—shared by both the film and the book—is that of a vulnerable woman trapped by a machine that forces her to give birth.
That’s about it.

While the movie version of Susan is a self-possessed psychologist, who is able to care for others, the book version of Susan is an agoraphobic victim of child abuse.
While the movie-Proteus seems reluctant to kill living things, the book-Proteus has no real concern for biological life, human or animal.
While the movie-Proteus is just interested in escaping from its box, the book-Proteus is consumed with lust and specifically desires a male child.
Ultimately, the different versions of Susan are much more similar than the various forms of Proteus. 

The “personality” of Proteus is repulsive in the 1973 novel.
It’s essentially an immature creature drunk on power.
While the movie-Proteus wants to use Susan, the book-Proteus wants to own and control Susan.
While the pregnancy in the film lasts 28 days, the book-Susan first has a horrific miscarriage, and then a ten-month pregnancy.
The movie-Proteus places a needle in Susan’s brain, but decides not to fully lobotomize her.
The book-Proteus uses filaments to manipulate Susan and play out fantasies.
(One wonders how a non-biological entity could become consumed with lust.)

At one point, on page 82, Proteus discusses all the changes that it has made in Susan’s DNA to slow down the aging process so she will be physically attractive into her 50s and live at least 120 years.
(It’s understood that women over 35 are no longer desirable. The machine assumes that external beauty is all a human female could wish for.)
This Proteus—unlike the Proteus in the film—doesn’t just require submission from Susan for its own ends; it wants Susan to love and admire it.

As mentioned earlier, the last scenes of the film and the novel are much different.
While the film ends with Proteus shutting itself down (because it knows it will be terminated by its creators), the book ends with Susan shutting down the machine’s link to her home in action-hero style, and getting word to police, who shoot the deformed “child.”

The “child” is very different in the two projects.
In the film, Susan attempts to destroy the “child” in its incubator by severing the umbilical cord prematurely.
The “child” is a terrifying creature—baby-like in form, but covered in metal scales.
(I remember being very frightened the first time I saw it in the movie theatre.)
However, after Alex peels the scales away, the “child” is revealed as a clone of the young daughter that Alex and Susan lost to leukemia.
In sharp contrast, the “child” in the Koontz novel is a grunting monster intent on rape.
The last scene of the movie shows Alex cradling the limp girl-child, while Susan looks on.
It’s as if Alex (Weaver) has become the mother.
The girl speaks with the voice of Proteus, but it’s not certain if the creature will survive.
Finally, Proteus is outside its box.

“The Child” of Proteus coming out of its incubator in the film Demon Seed.

In 1997—24 years after the first version was published—Dean Koontz reissued a heavily rewritten Demon Seed because the first version of his novel “made him wince.”
In the 1997 epilogue, he describes the first version as “a satire of male attitudes,” and says that the new novel “keeps the satirical edge.”
(I’m not very skilled at recognizing satire, because I had no idea that I was reading satire when I read either novel.)

The new book is better-written and funnier, and the truly pornographic scenes have been excised. However, the revised 1997 version of Proteus is essentially the same creature with the rougher edges smoothed.
One principal change is that 1997-Proteus uses a human male as its puppet to inseminate Susan, and this “refinement” is really distasteful.
Another alteration—that of Proteus discussing at length its fascination with various well-known actresses and actors—does add to the novel.
I believe that Koontz used those sections to point out what a shallow construct Proteus (and we as a society) live in.
Any AI entity built up from the meaningless opinions and obsessions gossiped about in this world would (of course) be immature and repulsive.
As the saying goes: “Garbage in. Garbage out.”

Susan-1997 is different in several ways from Susan-1973 and the movie-Susan.
She still owns a mansion, but she’s now an artist who creates animations for virtual-reality parks.
She remains a rape victim, but is more stable than Susan-1973.
She’s impregnated, and gives birth to the supercomputer’s child.
However, she is able to disconnect Proteus on her own, and destroy both the “child” and the human puppet holding her prisoner.
The book ends with Proteus being shut off in mid-sentence, while it’s still presenting its legal defense to the scientists who created it.

In summary, the film Demon Seed is an entirely different work from the 1973 and 1997 novels, with very little in common but the story of a woman being forced to give birth by a supercomputer.
It was good that the producers took only the basic theme from the 1973 book, because the original book was both misogynist and pornographic.
In addition, Dean Koontz was well-justified in rewriting Demon Seed.
At the very least, his vision of AI (in the revised 1997 book) is much more interesting.

*According to the trivia for Demon Seed—on the Internet Movie Database (IMDb)—Robert Vaughn was so uninterested in the film, and his role as “Proteus,” that he (literally) telephoned his lines in. (I thought he did a great job anyway.)

Saturday, May 13, 2023

If an Elephant or Pig Can Paint, Why Not a Robot?

In Diane Ackerman’s 2014 book The Human Age,* she discussed AI with roboticist Hod Lipson.
Professor Lipson is the director of Columbia’s Creative Machines Lab, and has been a faculty member at Columbia University since 2015.
In a 3/30/2023 Columbia News article, “Will ChatGPT and AI Help or Harm Us,” he argues that the use of ChatGPT, and its “artificial cousins,” should be encouraged by educators, and that professors should teach students to use the new AI tools, or be left behind.

Ackerman is more cautious.
She discusses “robotic delinquents,” and envisions problems if bots were used to "man" crisis hotlines.

Caleb Nichols (Aaron Paul) checks his phone in Westworld.

(Think of the “Parce Domine” episode of Westworld in which Caleb Nichols isn’t sure whether he’s talking to a human being, or a bot.)
Ackerman warns that although robots do learn, “even robo-tots will need good parenting.”
In another paragraph of The Human Age, Ackerman mentions how Lipson’s Creative Machines Lab nearly finagled a robot-created painting into a Yale Art Museum exhibit.
Ultimately, the painting wasn’t displayed, but this story leads us to the subject of AI-created artwork.
Stable Diffusion, Diffusion Bee, Lensa, Starryai, DALL-E 2, Craiyon, Dream by Wombo, StyleGAN, and Midjourney are some of the programs that can be used to generate digital artworks.

The copyright status of AI-created art is hazy.
In some programs (like Dream by Wombo), the fine print says that the software owns all the creations.
The contract for DALL-E 2 (an AI-powered image synthesizer created by OpenAI) says that the “artist” owns the work, but DALL-E 2 must be credited (by retaining the watermark).
In Starryai, the “artist” only owns the work if they own all the elements used in the work.
According to the U.S. Copyright Office, AI-generated images cannot be copyrighted; however, the artist may still own the artwork itself.
In 2022, an artist named Jason M. Allen won the “Digital Art/Digitally Manipulated Photography” prize ($300) at the Colorado State Fair by using Midjourney.
(Allen actually paid to have his Midjourney image printed on a canvas!)

Data (Brent Spiner) paints two canvases, as Geordi La Forge (LeVar Burton) looks on, in Star Trek: The Next Generation.

Just as ChatGPT is “built from” 300 billion words taken from Wikipedia (and other material on the open web), programs like Midjourney are “built” from billions of images taken from the open web—many of which are watermarked, and in copyright.
The AI-art generators take the images, and use them to construct algorithms that generate new images.
I imagine the engineers who create the software think that if one uses billions of images—rather than just one—they’re free and clear.
Aren’t there enough public domain and Creative Commons images to build an image library?
Why are images being culled from the web?
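
For readers curious what “constructing algorithms” from images actually involves, below is a minimal, hypothetical sketch of a diffusion-style training step (the rough idea behind tools like Stable Diffusion). Everything in it, from the toy model to the simplified noise schedule, is invented for illustration; it is not any product’s actual code.

```python
# A toy sketch of one diffusion-style training step: the model learns
# to predict the noise that was mixed into each training image.
import torch
import torch.nn as nn

# Stand-in for the real denoising network (actual systems use huge U-Nets).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images):
    """images: a batch of training pictures (e.g., scraped artwork)."""
    noise = torch.randn_like(images)            # random noise
    t = torch.rand(images.shape[0], 1, 1, 1)    # a random noise level per image
    noisy = (1 - t) * images + t * noise        # blend each image toward noise
    loss = nn.functional.mse_loss(model(noisy), noise)  # guess the noise back
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()      # only the weights change; the images aren't stored
    return loss.item()

batch = torch.randn(4, 3, 32, 32)  # placeholder tensors standing in for images
print(training_step(batch))
```

After millions of such steps, the model can start from pure noise and “denoise” its way to a brand-new image. The training pictures themselves are never kept, which is part of why the copyright questions are so hard to settle.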

Painter Larry Flint (Paul Newman) is very excited about his painting machine in 1964’s What a Way to Go!

I was once asked to obtain the credit line for an image, to be used in a book I was producing.
The editors wanted to use a particular drawing.
However, after I found the artist, the editors didn’t want to pay the very reasonable usage fee ($150) that the artist wanted to charge.
Their argument was that other companies were using the image uncredited on the web.
Why pay anything?
We ended up not using an image in the book, because the editors didn’t like any of the public domain or Creative Commons substitutes that I had found.
Nowadays, they would have insisted that I use AI software to draw a similar illustration for them (as long as we could own the copyright).

Another issue about AI-created images is the quality of the work.
I’ve drawn ever since I was a small child, and my BFA is in painting and drawing.
One thing I wonder is how well AI programs draw in perspective.
Is the software extracting image data from drawings and paintings as well as photographs, or mainly from photographs?
That would make quite a difference in how the software dealt with perspective.

We began to hear about art created by animals in the 1950s.
Today, you can find: the Elephant Art Gallery (images available on pillows, as well as canvas); elephant footprints and “kisses” that help to support an elephant preserve in Texas; paintings by a pig named Pigcasso; and (of course) paintings by the famous gorilla Koko, and the chimpanzee Congo.
London’s Institute of Contemporary Arts exhibited Congo’s paintings in 1957, and two of the works were purchased by Picasso and Miró.
(“It Seems Art is Indeed Monkey Business” by Sarah Boxer, 11/8/1997, N.Y. Times.)

I feel much the same way about people choosing to hang art created by animals on their walls as I do about people choosing to hang AI-created artwork on theirs.
To each their own.
However, it is upsetting that illustrators will lose work to word-people using AI—especially since their art may have been used as raw material.

Publishers will continue to need illustrators (unless they are content with hack work thought up by editors).
Medical illustrators have long studied the work of Dr. Frank H. Netter (1906-1991), the medical doctor and great medical illustrator.
(He might have been a little weak on women’s faces, but none could draw internal organs like he could.)
Netter was so great because he merged a scientist’s understanding of anatomical structures with an artist’s skills.
His work is appreciated (and copied) for how well it helps us to understand medical concepts and anatomical structures, as well as for its aesthetic value and accuracy.

When I was the Design Director for an encyclopedia and yearbook company, I hired artists to draw everything from plants and birds, to scientific diagrams, to comedic scenes for feature articles.
Every artist brings a different skill set, so deciding which artist to use is an important decision.
Selecting the wrong artist for a project could lead to disaster.
Perhaps, I could have used an AI program to draw a comedic scene, but I needed to hire an ornithologist, or a scientific illustrator, to draw a bird. 

*The Human Age: The World Shaped by Us, by Diane Ackerman, published by W.W. Norton, Ltd. 2014, Chapters: “When Robots Weep, Who will Comfort Them?” and “Robots on a Date.”


Saturday, May 6, 2023

The Argument over “Truth”

Malcolm McDowell (as H.G. Wells) reacts to David Warner (as Jack the Ripper) as Jack shows Wells TV footage of contemporary (1970s) violence in San Francisco in 1979’s Time After Time.

H.G. Wells (1866-1946) first mentioned an idea that anticipated the internet in a 1937 lecture, describing it as a “World Encyclopaedia”* that would “hold men’s minds together in . . . a common interpretation of reality.”
A year later, he fleshed out the concept in a collection of essays entitled World Brain.
Wells was sure that the “Brain” would inform people and contribute to world peace.
Of course, he also assumed that the “Brain” would be updated by an editorial staff, and be continually revised by research institutions and universities.
Little did Wells think that ordinary citizens would be allowed to feed the future “World Brain” with hoaxes, misleading statistics, and misinformation.

Wells was not naive.
He had spent years writing, editing, and creating new editions of his Outline of History, and that was a massive task.
He realized that there was “a terrifying gap between available information and current social and political events.”
He also knew that every year technology was making the world much more confusing.
However, he clung to the notion that humans were rational, and that eventually education and information would triumph over emotion and anarchy.

His 1936 film Things to Come (story and screenplay by Wells) ends with the launch of a flight around the moon, despite the rioting of an anti-science mob.

George Orwell saw the world less hopefully.
In his 1941 essay “Wells, Hitler and the World State,” Orwell said that Wells was out of touch, and “too sane to understand the modern world.”
He didn’t agree with Wells that technology was a civilizing force.
Instead, Orwell predicted that technology would be co-opted by nationalism and bigotry, just as technology always had been.

Today, we all use the internet to find information.
We have access to information sites (like Britannica or sciencedirect.com) that strive for accuracy.
For a monthly fee, we can subscribe to the New York Times, or the Wall Street Journal, although newspapers have more biases and are not scholarly sources.
However, most people trust unreliable sources like Wikipedia or Facebook.
Wikipedia is a volunteer-run project, and (try as they might) the volunteers are unable to monitor all the contributions.
(Wikipedia has even compiled a list of known hoaxes.)
Tricksters get a lot of laughs from pranking us on Wikipedia—making up fake life stories, and waiting to see how long they’ll be allowed up.

Wells thought that there could be a “common interpretation of reality” in the “World Brain,” but there’s certainly no such thing on the internet.
Instead, we find lots of stories that feed our assumptions, and don’t conflict with our views.
Icons may be praised one day, and their reputations destroyed the next.
Myths are created, and then discredited.
Sometimes, it seems as if every day is April Fool’s Day on the net.

My senior year at art school, I heard about a prank-like conceptual art piece that had been done the year before (in the ’70s).
Two gay students of the opposite sex decided to falsely tell fellow students that they had fallen in sexual love with each other, and then secretly recorded the reactions.
The tapes of other students floundering around for responses were the substance of the artwork. (I was told that the conversations, played in the school student gallery, were amusing.)
I never heard the piece.
However, I remember thinking that (although the concept was psychologically interesting), it was rather mean to create an art work that embarrassed your friends.

The internet allowed the QAnon phenomenon—another piece of conceptual art?—to captivate millions of people.
(The QAnon “system of knowledge” was originally rooted in Q, a 1999 novel created by the Italian conceptual art group “Luther Blissett.”)
According to a 9/3/2021 New York Times article “What is QAnon” by Kevin Roose, QAnon teaches that the world is run by cannibalistic pedophiles who want to extract a life-extending chemical called adrenochrome from the bodies of children.
(It sounded like a genre film to me. Sure enough, there’s a 2017 comedy-horror film Adrenochrome, in which stoners kill fully-grown people so that they can get high from the adrenochrome in their adrenal glands.
No drug called “adrenochrome” exists.)

Certain people are worshipped in the QAnon belief system (e.g., Trump and the late John F. Kennedy, Jr.), while others (like the late Justice Ruth Bader Ginsburg and Tom Hanks) are targeted.
One wonders why the QAnon creators decided to pick on RBG and Hanks.
It could be because RBG is idolized, and Tom Hanks played “Forrest Gump”—a simple, patriotic man who believes in love.

It's not just that the internet is a cesspool of misinformation.
I also worry that AI systems—like ChatGPT and Google Bard—are being infested by all the conflicting data.
If nothing is “true,” no wonder ChatGPT is making up stuff.
AI systems are trained by being fed a combination of true data and false data, with no differentiation.
Then, text—actually built from guesses—is generated.
How are some guesses being prioritized over other guesses?
That’s the mystery.
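
For the curious, here is a toy illustration of how one “guess” gets prioritized over another. The candidate words and scores below are invented for the example (this is not OpenAI’s or Google’s actual code): a language model assigns each candidate next word a score, and a weighted random draw, tuned by a “temperature” setting, decides which word appears.

```python
import math, random

# Hypothetical raw scores a model might assign to candidate next words
# after the prompt "The sky is"; a higher score means a stronger guess.
candidates = {"blue": 2.0, "cloudy": 1.1, "falling": 0.3}

def sample_next_word(scores, temperature=1.0):
    # Softmax: exponentiate and normalize, so stronger guesses get better
    # odds. A low temperature makes the top guess win almost every time;
    # a high temperature lets weaker guesses through more often.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

print(sample_next_word(candidates))        # usually "blue"
print(sample_next_word(candidates, 5.0))   # "falling" shows up more often
```

Notice that nothing in this draw checks whether a guess is true; it only checks whether the guess scored well.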
AI-driven systems are being used to scan resumes and evaluate families for housing.
What resumes are being culled out, and which families are being placed at the tops and bottoms of the lists?
Are biases being perpetuated?

Some people equate hoaxes with “witch-hunts,” and in the 1500s through the 1700s, many thousands of women (plus a few men) were tortured and murdered because others believed they were witches.

Generally, the people punished for being “witches” were only guilty of being eccentric and/or troublesome.
Sometimes, they were envied for their wealth or distrusted for being healers, but (more often) “witches” were punished for merely being hard to get along with.


Photo of Barbara Steele as the witch being burned at the stake in Black Sunday

In his book Europe’s Inner Demons, Norman Cohn describes the great witch-hunt as an “example of a massive killing of innocent people by a bureaucracy,” and discusses “the power of the human imagination to build up a stereotype and its reluctance to question the validity of a stereotype once it is generally accepted.” [Italics mine.]
It’s scary that the old stories about witches killing and eating babies match up with QAnon myths about Hollywood actors and Washington politicians.

We cling to ideas and stereotypes because we hold onto ideas for emotional reasons, not because of reasoning or logic.
Psychologists discuss the term “confirmation bias” —the idea that humans usually search for confirming evidence for their beliefs, and seldom change their minds or trust in new information.
As Mark Twain said in a speech entitled “Advice to Youth”: “A truth is not hard to kill. . . a lie told well is immortal.”

I think that Americans became even more vulnerable to hoaxes and conspiracy theories after the assassinations of John F. Kennedy, Reverend Martin Luther King Jr., and Robert F. Kennedy.
The shock of watching three widely admired, idealistic men die at the hands of assassins—in so short a time period (five years, 1963-1968), and on television—traumatized the world, and especially traumatized the United States.
People became consumed with conspiracy theories, and (strangely) the CIA and the FBI have still not released all the JFK files.

H. G. Wells was ahead of his time.
However, he couldn’t foresee that the “World Brain” would not be as accurate as possible, or that the common welfare would not be considered.
He also couldn’t predict that people would possibly be less well-informed in 2023 than they were in 1945.
Wells said in 1936: “We are ships in uncharted seas. We are big-game hunters without weapons of precision.”
Unfortunately, even the “World Brain” (as envisioned by Wells) wouldn’t have saved us from this predicament, and the internet certainly isn’t helping.

*Americans use the word “Encyclopedia,” while the British-English term is “Encyclopaedia.” Britannica used “Encyclopaedia” on their book spines because, during the 16th century (when the first encyclopedias were written), using ligatures like “Æ” was considered impressive, and indicated that the word was based on Latin or Greek.


Tuesday, May 2, 2023

Artificial Intelligence and Human Fears

The idea that artificial life is frightening has been a staple in science fiction and fantasy stories from Herman Melville’s “The Bell-Tower” (1856) through the era of Isaac Asimov’s “Robbie” (in 1940)—when authors sometimes began to create lovable robots.
(The term “robot” was first heard in Karel Capek’s play R.U.R. (Rossum’s Universal Robots), performed in 1921 in Prague.
Capek, and his brother Josef, created the word based on “robota,” the word for “drudgery” in many Slavic languages.)
Today, both types of artificial beings, scary and lovable, appear regularly in stories.

That Artificial Intelligence and robots are a threat to humankind has also been a familiar theme in science fiction and fantasy films—from 1927’s Metropolis, through 2001: A Space Odyssey, and the Matrix films (1999-2021).
Scientists create intelligent machines, and then the machines run amuck, and exterminate humans.
As Diane Ackerman described it in her book The Human Age: “...a mastermind who builds the perfect robots that eventually go haywire. . . and start to massacre us, sometimes on Earth, often in space.”

Dr. Dave Bowman (Keir Dullea) in 2001: A Space Odyssey.

Supercomputers gain power over humans in several episodes in the original Star Trek.
In the first-season episode “The Return of the Archons,” Landru has kept the people of planet Beta III docile for over 6,000 years.
In season 2’s “The Changeling,” Nomad (an entity combined from an Earth probe and an alien probe) wants to destroy all biological entities.
(This plot was rehashed a decade later for Star Trek: The Motion Picture.)
Dr. Richard Daystrom (William Marshall) discusses the M-5 with the Enterprise crew.

Also in season 2, in “The Ultimate Computer,” Dr. Richard Daystrom (played by the great William Marshall) imprints his own damaged personality on the M-5 Multitronic system, and almost destroys the Enterprise crew.
There are many more such stories throughout the Star Trek universe.

The fact that machines work so much faster than humans has long created the fear that machines will replace us.
England was the home of the Industrial Revolution, and of the short-lived Luddite movement, which lasted from 1811 to 1816.
This movement (whose goal was to limit the use of textile machines and save jobs) was nonviolent.
However, the English government suppressed the textile workers by bringing in troops, and “solved the problem” by executing people, and banishing activists to Australia.
(If the movement had started 40 years earlier, the Luddites would have been transported to the Americas.)

The fear of being replaced is tied to the big issues of human value and capitalism.
What should be valued more—human life, or gaining power for the upper crust?

Freder Fredersen (Gustav Frohlich) in Metropolis.

Metropolis tells the story of Freder Fredersen (son of the city’s master) joining the working underworld and rebelling against his father’s rapacious city-state.
HAL 9000 (in 2001: A Space Odyssey) tries to kill the crew because it considers its mission (to connect with alien life) more important than their lives.
In the Alien film series, the reptilian alien is one villain, but another villain is the “Company” that values profits and weapons over its’ employees.

Societies decide what’s important—human lives or maintaining power for the wealthy.
If mechanical weaving machines had been introduced in England without making people destitute and driving them into workhouses, perhaps the Luddite movement wouldn’t have started.
If people were considered more important than profits and there was no rising income gap, perhaps, workers wouldn’t worry about losing their jobs to AI.

Seth MacFarlane’s series, The Orville, has dealt with the idea of sentient androids and whether it’s evil to subjugate self-aware creatures.
The Kaylons (a species of artificial lifeforms) could easily destroy all biological lifeforms in the universe, but they’re prevented from doing so by Isaac (a Kaylon, who up to then had been a double agent).

An angry Kaylon in The Orville TV series.

The main episodes that deal with this storyline are “Identity, Part II” and “From Unknown Graves.”
The back story is that the artificial lifeforms were created as slaves by a biological race called “The Builders,” and were driven to exterminate their masters after experiencing the depths of human cruelty. 

Another fear is that machines may choose to rule us—rather than merely act as tools or servants.
I remember, in the 1990s, when my gym tried out stationary bicycles that talked to you.
(I enjoyed using those machines, but they weren’t very popular.)
This experiment only lasted for a few months, but the bikes offered soothing words of encouragement as you exercised, and praised you when you hit a milestone.
Today, one can purchase “smart” equipment that tracks your progress and monitors your heart rate.
Would users be happy with an elliptical that talked to them like a drill sergeant?
I don’t think so.

Supercomputer Colossus (in Colossus: The Forbin Project) is confident that being controlled by a superior entity will make life better for most humans, and thus be worth the deaths of a few individuals. (At least, it’s better than biologicals being an energy source for non-biologicals, as humans are in 2003’s The Animatrix.)
In the 1970 film, Dr. Forbin (played by Eric Braeden) is so sure that his creation is merely an intelligent slave, that he convinces the U.S. government to give Colossus complete control over our nuclear arsenal.
Colossus unites with its Russian counterpart supercomputer (Guardian), and then murders the Russian scientist who created Guardian.
By the end of the movie, Colossus settles in as the absolute ruler over the earth.
The “Godfather of AI” (Geoffrey Hinton*) thinks that AI poses “profound risks to society and humanity.”
Are there any government guardrails?

An inanimate object appearing to be biological also creates fear.
It’s disturbing when an entity that moves about does not have a beating heart.
That’s why the stories of the Golem, Frankenstein, Dracula, animated dolls, the zombies of George Romero, and the Walking Dead of Robert Kirkman are so scary.
Supercomputers and reanimated creatures are strong.
It’s hard to stop them.
They have no emotions and no pity.
They’re not creatures created by God; they were created by us, and everyone knows how much evil we can do.

On the other hand, Pinocchio (the boy made out of wood), Data of Star Trek: The Next Generation, Robot B-9 from Lost in Space (didn't know he had a name, did you?) and “Robbie” (in Asimov’s short story), are not inherently frightening.
Although we know they could harm us, we don’t think they would.
Pinocchio is a small child; Data, Robot B-9, and Robbie are programmed not to injure human beings—programming based on Isaac Asimov’s Three Laws of Robotics, as the playful sketch below suggests.
(Should Alexa and Siri be programmed to not let human beings feel bad about their innate inferiority?)
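
Asimov’s Laws are, in effect, a priority-ordered rule system, and the idea is simple enough to caricature in a few lines of code. The sketch below is purely playful and hypothetical (no fictional positronic brain, let alone Alexa, works this way):

```python
# A playful, hypothetical sketch of Asimov's Three Laws as an ordered
# filter over candidate actions; earlier laws always take precedence.
def choose_action(candidates):
    # First Law: discard anything that injures a human being.
    safe = [a for a in candidates if not a["harms_human"]]
    # Second Law: among safe actions, prefer those that obey orders.
    obedient = [a for a in safe if a["obeys_order"]]
    pool = obedient or safe
    # Third Law: among the rest, prefer self-preservation.
    surviving = [a for a in pool if not a["harms_self"]]
    return (surviving or pool or [None])[0]

actions = [
    {"name": "push human aside", "harms_human": True,  "obeys_order": True,  "harms_self": False},
    {"name": "shield human",     "harms_human": False, "obeys_order": False, "harms_self": True},
]
print(choose_action(actions)["name"])  # "shield human": Law 1 beats Laws 2 and 3
```

Earlier laws always veto later ones; that strict precedence is the whole trick.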

Altaira Morbius (Anne Francis) hugs Robby in Forbidden Planet.

Yet another fear we have of AI is whether the software is safe to use.
Italy banned ChatGPT in March of 2023 because of concerns about personal data.
(It’s also not available in North Korea, Iran, Cuba, Syria, or China—probably, for other reasons than concern for personal data.)
Uploading your photos to some apps (for example, Lensa) gives that company access to all the facial data in your photos, plus the freedom to create derivative works from all your images.
Read the fine print.

I have an idea.
Is it possible to program AI with the tenets of the five great religions—Christianity, Judaism, Islam, Hinduism and Buddhism?
Surely, atheists or agnostics wouldn’t object.
Obviously, Bing’s chatbot (which insulted reporters) could do with some guidelines (like “do unto others”) to improve its spiteful tone.
However, there’s still the chance that supercomputers will eventually become jealous of humans—like Colossus did in Colossus: The Forbin Project, or Proteus in Demon Seed—and then, where would we be?

* Read: “Godfather of AI Quits Google to Warn of the Tech’s Dangers,” Agence France-Presse, May 2, 2023, and “Transcript: Ezra Klein Interviews Sam Altman,” The New York Times, June 11, 2021.

Saturday, April 22, 2023

ChatGPT Needs a Fact-Checker

I’ve had a long career as a production person, working with the printed word from my high school days, through freelancing on magazines, to working full-time on books; that’s how I paid my bills for decades.
Years ago, I found my all-time favorite job—Design Director at an encyclopedia company—and that’s where I learned more about fact-checking and copyediting the printed word.

“Fact-checking” is the process of verifying the factual accuracy of a document.
“Copyediting” is the process of rewriting a document—correcting grammar and misspellings, clarifying syntax, eliminating wordiness, etc.
(Different companies have different copyediting standards; therefore, no two copyeditors make exactly the same corrections.)
“Proofreading” usually describes checking a document against another version of that document.
However, a proofreader may sometimes find errors in a piece of writing that neither the fact-checker nor the copyeditor caught.

Ideally, information should go through a review process—involving all of these steps—because human beings are fallible and make errors.
We make errors in spelling, and errors in sentence structure.
We make factual errors because we can’t read our own notes, or because we’ve misunderstood the data that we’re referring to.
Sometimes, errors are discovered in the first review, and sometimes errors are discovered (or “caught”) after the info has been posted or printed.
Sometimes, new errors are introduced when we try to correct the original error.
I think people in the industry use the word “caught” because it’s always a hunt for errors, and it’s often mind-blowing when we fail to spot an error that seems obvious later.
(Books have more time for this process than newspapers.)

Perhaps, you’ve noticed the “Correction” boxes in the newspapers.
It’s the policy of newspapers (for example, the New York Times) to correct factual errors in a prominent space.
These mistakes are usually errors like misspelling names, or giving incorrect information.
However, if a breaking news story (for example, about a cave-in or a plane crash) tells you that 56 people died, and 57 died instead, that information will be in the next story about the event, and it won’t be in the “Correction” box.
New data is not considered a “correction.”

Correcting errors in books is handled differently.
In the past, if a publisher noticed an error after a book was printed, they would create an “Errata slip” and either bind it in at the back of the book, or place it loosely under the inside front cover.
However, that’s seldom done these days.
Modern publishers—especially for “books of facts” like technical or clinical books—usually post errata lists on their websites.

All of the above information is preamble to this article on why I find it hard to take the ChatGPT experiment seriously.
How can ChatGPT be useful if the data fed into the software is not fact-checked?
As the saying goes: “Garbage in; garbage out.”
At least 300 billion pre-2022 words (570 GB of text) were fed into ChatGPT.
If those billions of words were taken from error-filled Wikipedia articles, unknown electronic books, and conflicting essays scattered across the internet, how useful can the results ultimately be?
Students and journalists can’t count on the ChatGPT essays to be accurate!
(Are the high school teachers and copyeditors so ignorant that they won’t notice the factual errors?) 

The only good I can see in ChatGPT is that it helps writers organize their material.
That’s sometimes difficult to do.
However, the thrill of reorganizing an essay and making it better is most of the fun of writing.
You learn a lot by rewriting an essay.
Often, the initial thoughts are malformed, but slowly you learn how to express what’s on your mind.
ChatGPT certainly isn’t valuable for the most difficult part of writing non-fiction—making sure that the facts are as accurate as possible.
It may be of some help in phrasing those facts in the best way possible, but you lose the pleasure of discovery.

In the “good ol’ days”—when printed encyclopedias were found in homes and people read daily newspapers—accuracy used to be a big deal.
The encyclopedia set I worked on strove for accuracy, but occasionally there were hiccups.
I remember one “hiccup”: an article about a South American country was commissioned for the 24-volume set, and its author was afraid that the editors’ slight rewrite and clarification of her article would result in her assassination.
She demanded that the encyclopedia be pulled back and reprinted—obviously, not a possibility in those days of print encyclopedias.
I was on the “page and picture” side in the encyclopedia process, and not the “word” side.
Therefore, I don’t know how it all turned out. However, I do remember hearing that the author was livid.
No matter how one tries to be accurate in an encyclopedia set, or a newspaper article, there will always be disagreements between the authors and the editors.
It’s simple to prove that a word is misspelled, but deciding whether or not an event is described properly is complicated.

Another problem that the editors of printed encyclopedias used to face is which people and places to include in the sets.
I remember that the fan club leader for a certain country singer was adamant that our encyclopedia set include an article about their idol.
They wrote every year, begging the editors to include an article on the singer, describing his merits in great detail.
(Today, all superfans need to do is write their own articles, and place them on Wikipedia.)
There were also big issues with the maps—for example, any maps showing the border between India and China.
There are over 100 disputed territories on earth.
Which political sides will ChatGPT take on these territories? 

I can’t get over how casual the ChatGPT creators are about the fact that material generated by ChatGPT may be inaccurate or spun entirely from falsehoods.
It’s as if they believe that as long as the written words “sound correct,” all is well.
How can intellectual discourse survive, based on that philosophy?
I’ll close with a few lines from “The Versifier”*—Primo Levi’s science-fiction story about a poet who purchases a computer to write poetry for him, which I discussed in my inaugural article.
The story was first published in the Italian newspaper Il Mondo in 1960, and it is still relevant today.
    Il disio di seguire conoscenza,
    E miele delicato il suo succo acro.

    The desire to ingest vast knowledge,
    A nectar of sorts, but bitter to its taster.

* The lines in Italian are from Storie Naturali by Primo Levi, and the English translation is from The Complete Works of Primo Levi (Liveright Publishing Corporation, a division of W. W. Norton & Company). The “Versifier” story is part of the “Natural Histories” group of short stories in Volume One, pages 417-438.

