
Saturday, June 10, 2023

“Woke” or “Anti-woke”: What Does ChatGPT Say?

I wanted to clarify (in my own mind) what it means to be “woke” or “anti-woke,” and how censorship relates to both concepts.
While it’s hard to agree on what these words describe, many are convinced that they’re either one or the other.
First, I wrote about 500 words.
Then, I downloaded ChatGPT, to see where the app took me.
To my surprise, ChatGPT was useful.
The ChatGPT text was as dry and colorless as expected.
Moreover, the essays were full of platitudes and contained falsehoods.
However, ChatGPT did offer some ideas that I hadn’t thought of.

When I asked ChatGPT to define “woke,” I received an essay saying (in part) that:

To be woke signifies a deep commitment to social consciousness and active engagement with systemic injustices. . . and a willingness to challenge the status quo. 

The significance of being woke lies in its potential to bring about meaningful change, fostering a more inclusive, equitable, and just society for all.

When I asked ChatGPT for a definition of “anti-woke,” I received an essay saying that:

Being anti-woke represents a rejection of what critics perceive as the dogmatic and stifling aspects of the woke ideology.

It emphasizes individual responsibility, intellectual diversity, and the importance of free thought and expression.

While acknowledging the existence of systemic injustices, the anti-woke movement encourages a more nuanced and balanced approach to social issues . . . and open dialogue.

When I asked whether censorship was an issue on the political right or left, I received an essay saying that:

Censorship is not inherently a political right or left issue, but rather a complex and multifaceted dilemma that intersects with various ideologies.

While the political right may often advocate for censorship to protect societal values, the political left may prioritize censorship as a tool to combat discrimination and promote inclusivity. 

The ChatGPT response to my request for an essay on “culture wars” was mere gobbledygook.
It ended with the sentence: “Only through a commitment to mutual respect and a willingness to confront these issues head-on can the United States move towards healing and reconciliation.”
Well, that’s completely useless!

I would disagree with the ChatGPT definition of “woke” in several ways.
“To be woke” doesn’t always mean “a commitment to changing the status quo.”
Often, it just means being a bit more open to societal change.
“Woke” people are usually more open to erasing words like “master bedroom” from their vocabularies, using personal pronouns in their email signatures, and being more aware of microaggressions.
Often, it only means that the “woke” are more willing to face uncomfortable information, and learn from history.

I would also argue with the ChatGPT definition of “anti-woke.”
While “being woke” is perceived by the anti-woke as dogmatic, it’s difficult to figure out which beliefs are actually in contention.
It’s as if the perceived attitudes of self-satisfaction among the woke are more distressing than their actual ideas.
“Collective guilt” and “cancel culture” came up in the ChatGPT essay, but I’m sure that only a small percentage of “the woke” feel guilt.
Further, the woke are more likely to cancel people on their own side than the anti-woke are.
(Think of comedian Kathy Griffin and former Senator Al Franken.)
I also wonder what percentage of the anti-woke “acknowledge the existence of systemic injustices,” or desire an “open dialogue” (as suggested by ChatGPT).
Overall, being anti-woke may only mean that you are unhappy with the speed of, or the existence of, societal change, or that you find “woke” people annoyingly self-righteous.

I was very happy with the ChatGPT response on censorship.
Saying that the political right wants to “protect societal values,” while the political left wants to “combat discrimination and promote inclusivity” just about sums it up.
However, everyone has their own thoughts about what our societal values should be, which words are good or bad in promoting inclusivity, and whether “words” are important in this task.

Front cover for the paperback version of Casino Royale by Ian Fleming (published under the name You Asked for It by Popular Library in 1953).

Back cover for You Asked for It.
President John F. Kennedy was a big fan of the James Bond spy-thrillers (oddly called Jimmy Bond on this back cover).
However, JFK likely read the hardcover versions.

In order to “promote inclusivity,” the publisher of the late Roald Dahl recently produced two different versions of James and the Giant Peach—changing “Cloud-men” to “Cloud-people” (among other changes) in their Puffin version—and keeping “Cloud-men” in the classic Penguin version.
The spy-thrillers of Ian Fleming, and the mysteries of Agatha Christie, underwent a similar process.

Lobby card for Gone with the Wind with house servant Mammy (Hattie McDaniel) tying the girdle (or stays) of Scarlett O’Hara (Vivien Leigh). Hattie McDaniel received an Oscar for Best Actress in a Supporting Role for playing Mammy.

Combating racial discrimination, and other types of discrimination, by “sanitizing” or even cancelling works isn’t new.
I remember debates in the 1970’s about whether 1939’s Gone with the Wind should be banned.
Disney’s 1946 blend of live-action and animation, Song of the South,* isn’t considered “appropriate in today’s world,” and hasn’t been seen on home video legally since 1986.
Some Warner Brothers cartoons (like “Herr Meets Hare” and “Daffy the Commando,” produced as propaganda between 1941 and 1945) were restored and rereleased—along with a lengthy disclaimer—in 2008.
(Volume 6 of the Looney Tunes Golden Collection.)
However, some of the more racially-insensitive 1930s and World War II cartoons (for example, “Tokio Jokio”) will likely never see the light of day—at least, legally.

Meantime—in order to “protect societal values”—U.S. school boards are removing classic children’s books (like Charlotte’s Web and A Wrinkle in Time) from their school library shelves.
(I mention Charlotte’s Web and A Wrinkle in Time because these were two of my favorites.)
I looked up why one parent group proposed removing 1952’s Charlotte’s Web: the parents disliked characters dying, and thought that “talking animals” were “disrespectful to God.”
A Wrinkle in Time (1962) was criticized for “promoting witchcraft.”
I have fond memories of both books.
I remember my 4th grade school teacher, Mrs. Simmons, reading Charlotte’s Web aloud to us.
(I adored Mrs. Simmons.)
I checked out A Wrinkle in Time from our public library during the 1960’s, and ended up reading every other book I could find by Madeleine L’Engle.

Is it “woke” to buy a children’s book like 2005’s And Tango Makes Three—a story about two male penguins who help raise a chick together—in order to foster a more inclusive society?
Is it “anti-woke” to ask that And Tango Makes Three be removed from your public library, so that children won’t be influenced to accept homosexuality as normal?
In the end, I agree with those who support a parent’s right to keep certain books from their own children, but not the right to deny librarian-approved books to others.

Uncle Remus and Brer Rabbit cover.
It’s believed that Beatrix Potter based her Peter Rabbit stories on Uncle Remus.

*Song of the South was based on the once well-known Uncle Remus stories. The folklorist/author was Joel Chandler Harris (1848-1908), a white journalist. Harris wrote down the Br’er Rabbit and Br’er Fox tales after listening to African folk tales told by former slaves—primarily, George Terrell. According to the Atlanta Journal Constitution (11/2/2006), Disney Studios purchased the film rights for Song of the South from the Harris family in 1939, for $10,000—the equivalent of about $218,246.76 today.

Saturday, May 6, 2023

The Argument over “Truth”

Malcolm McDowell (as H.G. Wells) reacts to David Warner (as Jack the Ripper) as Jack shows Wells TV footage of contemporary (1970s) violence in San Francisco in 1979’s Time After Time.

H.G. Wells (1866-1946) first mentioned an idea that anticipated the internet in a 1937 lecture, describing it as “a World Encyclopaedia”* that would hold men’s minds together in “. . . a common interpretation of reality.”
A year later, he fleshed out the concept in a collection of essays entitled World Brain.
Wells was sure that the “Brain” would inform people and contribute to world peace.
Of course, he also assumed that the “Brain” would be updated by an editorial staff, and be continually revised by research institutions and universities.
Little did Wells think that ordinary citizens would be allowed to feed the future “World Brain” with hoaxes, misleading statistics, and misinformation.

Wells was not naive.
He had spent years writing, editing, and creating new editions of his Outline of History, and that was a massive task.
He realized that there was “a terrifying gap between available information and current social and political events.”
He also knew that every year technology was making the world much more confusing.
However, he clung to the notion that humans were rational, and that eventually education and information would triumph over emotion and anarchy.

His 1936 film Things to Come (story and screenplay by Wells) ends with the launch of a flight around the moon, despite the rioting of an anti-science mob.

George Orwell saw the world less hopefully.
In his 1941 essay “Wells, Hitler and the World State,” Orwell said that Wells was out of touch, and “too sane to understand the modern world.”
He didn’t agree with Wells that technology was a civilizing force.
Instead, Orwell predicted that technology would be co-opted by nationalism and bigotry, just as technology always had been.

Today, we all use the internet to find information.
We have access to information sites (like Britannica or sciencedirect.com) that strive for accuracy.
For a monthly fee, we can subscribe to the New York Times, or the Wall Street Journal, although newspapers have more biases and are not scholarly sources.
However, most people trust unreliable sources like Wikipedia or Facebook.
Wikipedia is a volunteer-run project, and (try as they might) the volunteers are unable to monitor all the contributions.
(Wikipedia has even compiled a list of known hoaxes.)
Tricksters get a lot of laughs from pranking us on Wikipedia—making up fake life stories, and waiting to see how long they’ll be allowed up.

Wells thought that there could be a “common interpretation of reality” in the “World Brain,” but there’s certainly no such thing on the internet.
Instead, we find lots of stories that feed our assumptions, and don’t conflict with our views.
Icons may be praised one day, and their reputations destroyed the next.
Myths are created, and then discredited.
Sometimes, it seems as if every day is April Fool’s Day on the net.

My senior year at art school, I heard about a prank-like conceptual art piece that had been done the year before (in the 1970s).
Two gay students of the opposite sex decided to falsely tell fellow students that they had fallen in sexual love with each other, and then secretly recorded the reactions.
The tapes of other students floundering around for responses were the substance of the artwork. (I was told that the conversations, played in the school student gallery, were amusing.)
I never heard the piece.
However, I remember thinking that (although the concept was psychologically interesting), it was rather mean to create an art work that embarrassed your friends.

The internet allowed the QAnon phenomenon—another piece of conceptual art?—to captivate millions of people.
(The QAnon “system of knowledge” was originally rooted in a 1999 novel Q, created by an Italian conceptual art group “Luther Blissett.”)
According to a 9/3/2021 New York Times article “What is QAnon” by Kevin Roose, QAnon teaches that the world is run by cannibalistic pedophiles who want to extract a life-extending chemical called adrenochrome from the bodies of children.
(It sounded like a genre film to me. Sure enough, there’s a 2017 comedy-horror film Adrenochrome, in which stoners kill fully-grown people so that they can get high from the adrenochrome in their adrenal glands.
Adrenochrome is a real chemical compound, an oxidation product of adrenaline, but it has none of the life-extending powers attributed to it.)

Certain people are worshipped in the QAnon belief system (e.g., Trump and the late John F. Kennedy, Jr.), while others (like the late Justice Ruth Bader Ginsburg and Tom Hanks) are targeted.
One wonders why the QAnon creators decided to pick on RBG and Hanks.
It could be because RBG is idolized, and Tom Hanks played “Forrest Gump”—a simple, patriotic man who believes in love.

It's not just that the internet is a cesspool of misinformation.
I also worry that AI systems—like ChatGPT and Google Bard—are being infested by all the conflicting data.
If nothing is “true,” no wonder ChatGPT is making things up.
AI systems are trained on a mixture of accurate and inaccurate data, with no differentiation.
Then, text—actually built from guesses about which words are likely to come next—is generated.
How are some guesses prioritized over other guesses?
That’s the mystery.
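
For the curious, here is a minimal, toy sketch of what “prioritizing guesses” means (it is not how any particular chatbot is actually built): a language model assigns a score to each candidate next word, converts those scores into probabilities, and then samples one word. The word list and scores below are invented purely for illustration.

    import math
    import random

    def sample_next_word(scores, temperature=1.0):
        # Convert raw scores into probabilities (a "softmax"), then pick one
        # word at random, weighted by those probabilities.
        scaled = [s / temperature for s in scores.values()]
        total = sum(math.exp(s) for s in scaled)
        weights = [math.exp(s) / total for s in scaled]
        return random.choices(list(scores.keys()), weights=weights, k=1)[0]

    # Invented scores for the word that follows "The sky is ..."
    candidate_scores = {"blue": 4.0, "cloudy": 2.5, "falling": 0.5}

    # A low temperature favors the top guess; a high one flattens the odds.
    print(sample_next_word(candidate_scores, temperature=0.7))

A low “temperature” makes the program stick with its top guess almost every time; a high temperature lets the unlikely guesses through more often, which is one reason the same question can produce different answers.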
AI-driven systems are being used to scan resumes and evaluate families for housing.
What resumes are being culled out, and which families are being placed at the tops and bottoms of the lists?
Are biases being perpetuated?

Some people equate hoaxes with “witch-hunts.” From the 1500s through the 1700s, many thousands of women (plus a few men) were tortured and murdered because others believed they were witches.

Generally, the people punished for being “witches” were only guilty of being eccentric and/or troublesome.
Sometimes, they were envied for their wealth or distrusted for being healers, but (more often) “witches” were punished for merely being hard to get along with.


Photo of Barbara Steele as the witch being burned at the stake in Black Sunday

In Europe’s Inner Demons, Norman Cohn describes the great witch-hunt as an “example of a massive killing of innocent people by a bureaucracy,” and discusses “the power of the human imagination to build up a stereotype and its reluctance to question the validity of a stereotype once it is generally accepted.”
[Italics mine.]
It’s scary that the old stories about witches killing and eating babies match up with QAnon myths about Hollywood actors and Washington politicians.

We cling to ideas and stereotypes for emotional reasons, not because of reasoning or logic.
Psychologists call this “confirmation bias”—the tendency to search for evidence that confirms our beliefs, and to seldom change our minds or trust new information.
As Mark Twain said in a speech entitled “Advice to Youth”: “A truth is not hard to kill . . . a lie told well is immortal.”

I think that Americans became even more vulnerable to hoaxes and conspiracy theories after the assassinations of John F. Kennedy, Reverend Martin Luther King Jr., and Robert F. Kennedy.
The shock of watching three widely admired, idealistic men die at the hands of assassins—in so short a time period (five years, 1963-1968), and on television—traumatized the world, and especially the United States.
People became consumed with conspiracy theories, and (strangely) the CIA and the FBI have still not released all the JFK files.

H. G. Wells was ahead of his time.
However, he couldn’t foresee that the “World Brain” would not be as accurate as possible, or that the common welfare would not be considered.
He also couldn’t predict that people would possibly be less well-informed in 2023 than they were in 1945.
Wells said in 1936: “We are ships in uncharted seas. We are big-game hunters without weapons of precision.”
Unfortunately, even the “World Brain” (as envisioned by Wells) wouldn’t have saved us from this predicament, and the internet certainly isn’t helping.

*Americans use the word “Encyclopedia,” while the British-English term is “Encyclopaedia.” Britannica used “Encyclopaedia” on its book spines because, during the 16th century (when the first encyclopedias were written), using ligatures like “Æ” was considered impressive, and indicated that the word was based on Latin or Greek.


Tuesday, May 2, 2023

Artificial Intelligence and Human Fears

The idea that artificial life is frightening has been a staple in science fiction and fantasy stories from Herman Melville’s “The Bell-Tower” (1856) through the era of Isaac Asimov’s “Robbie” (1940)—when authors began to create lovable robots as well.
(The term “robot” was first heard in Karel Capek’s play R.U.R. (Rossum’s Universal Robots), performed in 1921 in Prague.
Capek and his brother Josef created the word from “robota,” the word for “drudgery” in Czech and other Slavic languages.)
Today, both types of artificial beings, scary and lovable, appear regularly in stories.

That Artificial Intelligence and robots are a threat to humankind has also been a familiar theme in science fiction and fantasy films—from 1927’s Metropolis, through 2001: A Space Odyssey, and the Matrix films (1999-2021).
Scientists create intelligent machines, and then the machines run amuck, and exterminate humans.
As Diane Ackerman described it in her book The Human Age: “...a mastermind who builds the perfect robots that eventually go haywire. . . and start to massacre us, sometimes on Earth, often in space.”

Dr. Dave Bowman (Keir Dullea) in 2001: A Space Odyssey.

Supercomputers gain power over humans in several episodes in the original Star Trek.
In the first season, Landru had kept the people of planet Beta III docile for over 6,000 years in “The Return of the Archons.”
In season 2’s “The Changeling,” Nomad (an entity combined from an earth probe and an alien probe) wants to destroy all biological entities.
(This plot was rehashed a decade later for Star Trek: the Motion Picture.)
Dr. Richard Daystrom (William Marshall) discusses the M-5 with the Enterprise crew.

Also in season 2, in “The Ultimate Computer,” Dr. Richard Daystrom (played by the great William Marshall) imprints his own damaged personality on the M-5 Multitronic system, and almost destroys the Enterprise crew.
There are many more such stories throughout the Star Trek universe.

The fact that machines work so much faster than humans has long created the fear that machines will replace us.
England was the home of the Industrial Revolution, and of the short-lived Luddite movement, which lasted from 1811 to 1816.
This movement (whose goal was to limit the use of textile machines and save jobs) destroyed machinery but was largely nonviolent toward people.
However, the English government suppressed the textile workers by bringing in troops, and “solved the problem” by executing people, and banishing activists to Australia.
(If the movement had started 40 years earlier, the Luddites would have been transported to the Americas.)

The fear of being replaced is tied to the big issues of human value and capitalism.
What should be valued more—human life, or gaining power for the upper crust?

Freder Fredersen (Gustav Frohlich) in Metropolis.

Metropolis tells the story of Freder Fredersen (son of the city’s master) joining the working underworld and rebelling against his father’s rapacious city-state.
HAL 9000 (in 2001: A Space Odyssey) tries to kill the crew because it considers its mission (to connect with alien life) more important than their lives.
In the Alien film series, the reptilian alien is one villain, but another villain is the “Company” that values profits and weapons over its employees.

Societies decide what’s important—human lives or maintaining power for the wealthy.
If mechanical weaving machines had been introduced in England without making people destitute and driving them into workhouses, perhaps the Luddite movement wouldn’t have started.
If people were considered more important than profits, and there were no rising income gap, perhaps workers wouldn’t worry about losing their jobs to AI.

Seth MacFarlane’s series, The Orville, has dealt with the idea of sentient androids and whether it’s evil to subjugate self-aware creatures.
The Kaylons (a species of artificial lifeforms) could easily destroy all biological lifeforms in the universe, but they’re prevented from doing so by Isaac (a Kaylon, who up to then had been a double agent).

An angry Kaylon in The Orville TV series.

The main episodes that deal with this storyline are “Identity, Part II” and “From Unknown Graves.”
The back story is that the artificial lifeforms were created as slaves by a biological race called “The Builders,” and were driven to exterminate their masters after experiencing the depths of their creators’ cruelty.

Another fear is that machines may choose to rule us—rather than merely act as tools or servants.
I remember, in the 1990s, when my gym tried out stationary bicycles that talked to you.
(I enjoyed using those machines, but they weren’t very popular.)
This experiment only lasted for a few months, but the bikes offered soothing words of encouragement as you exercised, and praised you when you hit a milestone.
Today, one can purchase “smart” equipment that tracks your progress and monitors your heart rate.
Would users be happy with an elliptical that talked to them like a drill sergeant?
I don’t think so.

Supercomputer Colossus (in Colossus: the Forbin Project) is confident that being controlled by a superior entity will make life better for most humans, and thus be worth the deaths of a few individuals. (At least, it's better than biologicals being an energy source for non-biologicals, as humans are in 2003's The Animatrix.)
In the 1970 film, Dr. Forbin (played by Eric Braeden) is so sure that his creation is merely an intelligent slave, that he convinces the U.S. government to give Colossus complete control over our nuclear arsenal.
Colossus unites with its Russian counterpart supercomputer (Guardian), and then murders the Russian scientist who created Guardian.
By the end of the movie, Colossus settles in as the absolute ruler over the earth.
The “Godfather of AI” (Geoffrey Hinton*) thinks that AI poses “profound risks to society and humanity.”
Are there any government guardrails?

An inanimate object appearing to be biological also creates fear.
It’s disturbing when an entity that moves about does not have a beating heart.
That’s why the stories of the Golem, Frankenstein, Dracula, animated dolls, the zombies of George Romero, and the Walking Dead of Robert Kirkman are so scary.
Supercomputers and reanimated creatures are strong.
It’s hard to stop them.
They have no emotions and no pity.
They’re not creatures created by God; they were created by us, and everyone knows how much evil we can do.

On the other hand, Pinocchio (the boy made out of wood), Data of Star Trek: The Next Generation, Robot B-9 from Lost in Space (didn't know he had a name, did you?) and “Robbie” (in Asimov’s short story), are not inherently frightening.
Although we know they could harm us, we don’t think they would.
Pinocchio is a small child; Data, Robot B-9, and Robbie are programmed not to injure human beings—programming based on Isaac Asimov’s Three Laws of Robotics.
(Should Alexa and Siri be programmed to not let human beings feel bad about their innate inferiority?)

Altaira Morbius (Anne Francis) hugs Robby in Forbidden Planet.

Yet another fear we have of AI is whether the software is safe to use.
Italy banned ChatGPT in March of 2023 because of concerns about personal data.
(It’s also not available in North Korea, Iran, Cuba, Syria, or China—probably, for other reasons than concern for personal data.)
Uploading your photos to some apps (for example, Lensa) gives that company access to all the facial data in your photos, plus the freedom to create derivative works from all your images.
Read the fine print.

I have an idea.
Is it possible to program AI with the tenets of the five great religions—Christianity, Judaism, Islam, Hinduism and Buddhism?
Surely, atheists or agnostics wouldn’t object.
Obviously, Bing's chatbot (which insulted reporters) could do with some guidelines (like “do unto others”) to improve its spiteful tone.
However, there’s still the chance that supercomputers will eventually become jealous of humans—like Colossus did in Colossus: the Forbin Project, or Proteus in Demon Seed—and then, where would we be?

* Read: “Godfather of AI Quits Google to Warn of the Tech’s Dangers,” Agence France-Presse, May 2, 2023; and “Transcript: Ezra Klein Interviews Sam Altman,” The New York Times, June 11, 2021.

Saturday, April 22, 2023

ChatGPT needs a Fact-Checker

I’ve had a long career as a production person, working with the printed word from my high school days, through freelancing on magazines, and then working full-time on books; that’s how I paid my bills for decades.
Years ago, I found my all-time favorite job—Design Director at an encyclopedia company—and that’s where I learned more about fact-checking and copyediting the printed word.

“Fact-checking” is the process of verifying the factual accuracy in a document. “Copyediting” is the process of rewriting a document—correcting grammar and misspellings, clarifying syntax, eliminating wordiness, etc.
(Different companies have different copyediting standards; therefore, no two copyeditors make exactly the same corrections.)
“Proofreading” usually describes checking a document against another version of that document.
However, sometimes a proofreader may find errors in a piece of writing that neither the fact-checker nor the copyeditor could see.

Ideally, information should go through a review process—involving all of these steps—because human beings are fallible and make errors.
We make errors in spelling, and errors in sentence structure.
We make factual errors because we can’t read our own notes, or because we’ve misunderstood the data that we’re referring to.
Sometimes, errors are discovered in the first review, and sometimes errors are discovered (or “caught”) after the info has been posted or printed.
Sometimes, new errors are inserted when we try to correct the original error. I think people in the industry use the word “caught” because it’s always a “hunt” for errors, and it’s often mind-blowing when we fail to “spot” an error that seems obvious later.
(Books have more time for this process than newspapers.)

Perhaps, you’ve noticed the “Correction” boxes in the newspapers.
It’s the policy of newspapers (for example, the New York Times) to correct factual errors in a prominent space.
These mistakes are usually errors like misspelling names, or giving incorrect information.
However, if a breaking news story (for example, about a cave-in or a plane crash) reports that 56 people died when in fact 57 did, the corrected figure will appear in the next story about the event, not in the “Correction” box.
New data is not considered a “correction.”

Errors in books are handled differently.
In the past, if a publisher noticed an error after a book was printed, they would create an “Errata slip” and either bind it in at the back of the book, or place it loosely under the inside front cover.
However, that’s seldom done these days.
Modern publishers—especially for “books of facts” like technical or clinical books—usually post errata lists on their websites.

All of the above information is preamble to this article on why I find it hard to take the ChatGPT experiment seriously.
How can ChatGPT be useful if the data fed into the software is not fact-checked?
As the saying goes: “Garbage in; garbage out.”
At least 300 billion pre-2022 words (570 GB) were fed into ChatGPT. If those billions of words were taken from error-filled Wikipedia articles, unknown electronic books, and conflicting essays scattered across the internet, ultimately how useful can the results be?
Students and journalists can’t count on the ChatGPT essays to be accurate!
(Are the high school teachers and copyeditors so ignorant that they won’t notice the factual errors?) 

The only good I can see in ChatGPT is that of helping writers to organize material.
That’s sometimes difficult to do.
However, the thrill of reorganizing an essay, and making it better, is most of the fun of writing.
You learn a lot by rewriting an essay.
Often, the initial thoughts are malformed, but slowly you learn how to express what’s on your mind. ChatGPT certainly isn’t valuable in helping with the most difficult part of writing non-fiction—making sure that the facts are as accurate as possible.
It may be of some help in phrasing the facts in the best way possible, but you lose the pleasure of discovery.

In the “good ol’ days”—when printed encyclopedias were found in homes and people read daily newspapers—accuracy used to be a big deal.
The encyclopedia set I worked on strove for accuracy, but occasionally there were hiccups.
I remember one “hiccup” when an article about a South American country was commissioned for the 24-volume set, and the author of that article was afraid that the slight rewrite and clarification of her article (by editors) would result in her assassination.
She demanded that the encyclopedia be pulled back and reprinted—obviously, not a possibility in those days of print encyclopedias.
I was on the “page and picture” side in the encyclopedia process, and not the “word” side.
Therefore, I don’t know how it all turned out. However, I do remember hearing that the author was livid.
No matter how one tries to be accurate in an encyclopedia set, or a newspaper article, there will always be disagreements between the authors and the editors.
It’s simple to prove that a word is misspelled, but deciding whether or not an event is described properly is complicated.

Another problem that the editors of printed encyclopedias used to face was deciding which people and places to include in the sets.
I remember that the fan club leader for a certain country singer was adamant that our encyclopedia set include an article about their idol.
They wrote every year, begging the editors to include an article on the singer, describing his merits in great detail.
(Today, all superfans need to do is write their own articles, and place them on Wikipedia.)
There were also big issues with the maps—for example, any maps showing the border between India and China.
There are over 100 disputed territories on earth.
Which political sides will ChatGPT take on these territories? 

I can’t get over how casual the ChatGPT creators are about the fact that material generated by ChatGPT may be inaccurate or spun entirely from falsehoods.
It’s as if they believe that as long as the written words “sound correct,” all is well.
How can intellectual discourse survive, based on that philosophy?
I’ll close with a few lines from Primo Levi’s science-fiction story “The Versifier”*—a story, which I discussed in my inaugural article, about a poet who purchases a computer to write poetry for him.
The story was first published in 1960 in the Italian newspaper Il Mondo, and is still relevant today.
    Il disio di seguire conoscenza,
    E miele delicato il suo succo acro.

    The desire to ingest vast knowledge,
    A nectar of sorts, but bitter to its taster.

The lines in Italian are from Storie Naturali by Primo Levi, and the English translation is from The Complete Works of Primo Levi (Liveright Publishing Corporation, a division of W.W. Norton & Company). (The “Versifier” story is part of the “Natural Histories” group of short stories in Volume One, pages 417-438.)

