Taming the AI Beast: How to make ChatGPT serve, not enslave you
I’ve been playing with the ChatGPT free version for a few weeks now and I admit I’m impressed with what it can do. But I’m also concerned about the dangers inherent in this and other generative AI tools.
Here, we’ll review the excitement around the ways content creators can use ChatGPT’s pretty amazing capabilities; examine two major concerns about the quality and one about the ownership of its output; and talk about ways you can avoid those dangers.
Yes, it’s a powerful tool
And it’s not going away. You’ve seen the headlines about how ChatGPT
“… is taking the internet by storm”
“… the AI chatbot everyone’s talking about”
“… is blowing everybody’s mind”
“… passed an MBA exam”
“… writing a novel using AI”
With millions already using it, the companies and programmers behind ChatGPT and similar generative AI platforms will only keep them growing and evolving.
So as the Borg (in Star Trek) says, “Resistance is futile.”
Then again, the humans survive in that storyline. Picard and Seven learn to take advantage of the tech.
So the real question is how can you put the tools that AI makes available to best use in your work and life?
I’ve been doing a bunch of research myself on the three parts of this post. We’ll circle back to why that’s important below.
Then, I decided to ask ChatGPT to write a post for me. Here’s the way I worded the request, focusing on business uses:
“Write a blog post describing the best ways to use ChatGPT for business purposes. Take note of the weaknesses and limitations, including database limits, false answers and copyright ownership issues. Offer suggestions for avoiding problems from these weaknesses and limitations.”
The response was a decent first draft, 624 words long. And that, my friends, is one of the most powerful aspects of this tool: generating first drafts, or as I called them on LinkedIn, “instant outlines.”
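If you want to fold this “instant outline” step into a script rather than the chat window, the same request can be sent through OpenAI’s Python library. This is a minimal sketch, not a verified recipe: the helper names are mine, the model name is an assumption, and the import is guarded so the prompt-building part works even without the package installed.

```python
# Sketch: automating the "instant outline" request.
# build_draft_prompt and request_first_draft are illustrative helpers;
# the model name "gpt-3.5-turbo" is an assumption.
try:
    from openai import OpenAI  # requires the openai package and an API key
except ImportError:
    OpenAI = None

def build_draft_prompt(topic: str, caveats: list[str]) -> str:
    """Assemble a drafting prompt that asks the model to flag its own limits."""
    caveat_text = ", ".join(caveats)
    return (
        f"Write a blog post describing the best ways to use ChatGPT for {topic}. "
        f"Take note of the weaknesses and limitations, including {caveat_text}. "
        "Offer suggestions for avoiding problems from these weaknesses and limitations."
    )

def request_first_draft(prompt: str) -> str:
    """Send the prompt and return the model's draft text."""
    if OpenAI is None:
        raise RuntimeError("openai package not installed")
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

prompt = build_draft_prompt(
    "business purposes",
    ["database limits", "false answers", "copyright ownership issues"],
)
```

The point of isolating the prompt in a function is that you can iterate on wording, which, as we’ll see, matters far more than the plumbing.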
But I knew enough from my own research to recognize that it was incomplete. And the writing became repetitive and overly formal for my style. For example, slight variations of the phrase “ChatGPT can only provide answers based on the data it has been trained on” appear six times in its short answer. To me, that point needed to be made only once.
The four areas where ChatGPT said it could be useful were customer service, content creation, data analysis, and personalization.
I’ll go on record saying it greatly overstates its own usefulness as a customer service tool, if my experience with chatbots giving automated answers is common. Has anyone ever received a useful answer in the 3-5 attempts a chatbot insists on making before turning you over to a human?
I’m not qualified to assess its data analysis capabilities, and if personalization means embedding the tool into a website to recognize our behaviors and deliver more relevant results, then I’ll give it that one, as many sites do a decent job in the “you may also like” area.
That leaves content creation, which is where most of the recent excitement seems to be centered. Here, I’m using only a few excerpts from the first draft generated by ChatGPT. But that doesn’t mean the exercise was useless. The draft provided several useful nuggets, a proposed sensible structure for the post, and upon reflection triggered some useful follow-up questions that we’ll explore in a moment.
I’ll wrap up this section with some links to posts and articles you may find useful in expanding your use of ChatGPT:
- The 100 Best ChatGPT Prompts to Power Your Workflow (covers topics like marketing, entrepreneurs & business, blogging, copywriting, and many more)
- 5 Ways to Use ChatGPT in Your Workflow (idea lists, first drafts, titles, research topic lists, revising to shorten text)
- ChatGPT: 30 incredible ways to use the AI-powered chatbot (this one bends my focus on business uses, but has some fun ideas and useful caveats)
Danger, Will Robinson!
Lies, Damn Lies, and Statistics
Several examples of ChatGPT and its competitors making glaring errors have been covered in the news lately. The headline of that article about ChatGPT passing an MBA exam obscures some of the problems the Wharton professor actually reported:
“Surprisingly, it performed the worst when prompted with a question that required simple math calculations. … ChatGPT at times makes mistakes in relatively simple calculations at sixth-grade level math. These mistakes can be massive in magnitude. … The present version is not capable of handling more advanced process analysis questions, even when they are based on fairly standard templates.”
Another Wharton professor called ChatGPT “a consummate bullshitter, and I mean that in a technical sense,” explaining:
“Bullshit is convincing-sounding nonsense, devoid of truth, and AI is very good at creating it. You can ask it to describe how we know dinosaurs had a civilization, and it will happily make up a whole set of facts explaining, quite convincingly, exactly that. It is no replacement for Google. It literally does not know what it doesn’t know …”
And in an article on Medium, writer Zulie Rane told the (almost) horror story of working with ChatGPT on an article for a client. She spent hours working with the AI to get some of the content and then wrote the article herself, incorporating some of the material from ChatGPT and adding her own research. But just to be doubly sure about the accuracy and quality of this article she was being paid to produce, she sent it off to a professional editor knowledgeable about the topic.
She got the article back “covered in red writing and strikethroughs and critical comments.” Her list of issues:
“It was just wrong.
“ChatGPT had fabricated facts. ChatGPT made incorrect analogies.
“ChatGPT drew distinctions that didn’t exist.
“ChatGPT assigned roles to the wrong jobs.
“Not only that, but ChatGPT got specific technologies wrong.”
Rane’s perhaps understated conclusion: “ChatGPT is not a reliable writer or researcher.”
Lest you think these are criticisms from Luddites who resent new technology, why not take ChatGPT’s own word for it?
Remember that I asked ChatGPT to include its own weaknesses and limitations in the blog post draft it generated. Its answers included:
“False Answers: ChatGPT may provide false or inaccurate answers based on the data it has been trained on.”
And a post about ChatGPT on the OpenAI blog helps explain why such errors occur:
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; …
“Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.”
I followed up my initial request by asking for examples of ChatGPT giving incorrect answers. The AI gave remarkably honest and sobering answers:
“Here are some examples of false or inaccurate answers generated by ChatGPT:
- Medical Advice: ChatGPT may generate false or inaccurate medical advice if it has been trained on outdated or unreliable medical data. For example, if a user were to ask ChatGPT for medical advice on a specific condition, the answer provided may not be accurate or may be outdated.
- Legal Advice: Similarly, ChatGPT may generate false or inaccurate legal advice if it has been trained on outdated or incorrect legal data. For example, if a user were to ask ChatGPT for legal advice on a specific legal issue, the answer provided may not be accurate or may not apply to the user’s specific situation.
- Sensitive Topics: ChatGPT may generate insensitive or inappropriate responses on sensitive topics such as race, gender, or religion. These responses may be based on biased or outdated data used to train the language model.
- Company-Specific Information: ChatGPT may generate false or inaccurate information on company-specific topics such as pricing, product features, or company policies. This may occur if the language model has not been trained on up-to-date company data or if the data is incomplete.”
When OpenAI and the chatbot themselves warn you that you might get “incorrect or nonsensical answers” to your questions, it seems wise to take notice.
Take special note of that warning from OpenAI that ChatGPT has “currently no source of truth.”
Think about what kinds of “incorrect or nonsensical” garbage might be included in its internet scrapings. And how it may go about pulling together answers when it makes its “guess what the user intended.”
Want further proof that ChatGPT will make things up? Zulie Rane noted,
“The further you push [ChatGPT], the more you’ll realize this. The more details you ask, the more you demand, the less correct it gets.”
When I first started researching the copyright questions discussed below, I “pushed” ChatGPT to provide specific sources for recent thinking and asked it for links to relevant online articles. The first time I asked, I got this response: “I’m sorry, but as a text-based AI language model, I don’t have the ability to provide links.” Having previously gotten answers with links in them, I knew this was a lie.
So I refined the question a bit and this time got a very promising answer:
Great, I thought.
Until I clicked the links:
I tried searches within each of these sites, along with web searches in Google and Bing, with and without quotation marks around the supposed titles of these alleged articles. I even tried searching the names of the authors and their online publication lists.
As far as I can tell, ChatGPT made up the articles and its descriptions of them.
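If you’d rather not repeat my manual clicking-and-searching, the check can be partly automated: pull every URL out of the model’s answer and see whether each one actually answers. A minimal sketch, with an illustrative regex and checker of my own (fabricated articles typically fail to resolve or return an error):

```python
# Sketch: sanity-checking URLs an AI hands you before citing them.
import re
import urllib.error
import urllib.request

def extract_urls(text: str) -> list[str]:
    """Pull every http(s) URL out of a block of model output,
    trimming trailing sentence punctuation."""
    return [u.rstrip(".,;)") for u in re.findall(r"https?://\S+", text)]

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the URL answers with a non-error status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

answer = "See https://example.com/real and https://no-such-site.example/fake."
urls = extract_urls(answer)  # run url_resolves(u) on each when online
```

A 200 response only proves the page exists, of course, not that it says what ChatGPT claims it says; that last verification step is still on you.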
It might have been true back when …
The other major limitation ChatGPT mentions so often that the warning might get lost is the currency of the information it uses to generate answers. OpenAI explains, “ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022.” This translates to a database scraped from the internet in 2021.
I got the same answer from ChatGPT itself when I asked for its weaknesses:
“Database Limits: ChatGPT’s responses are limited by the data it has been trained on. [I.e., the internet, up through 2021] Therefore, it may not always have the most up-to-date or relevant information.”
Some stuff has happened since 2021, important events (e.g., mid-term elections, war in Ukraine), evolving ideas (e.g., around racial, gender, environmental justice), significant discoveries (e.g., from the Webb telescope). Much of it has been recorded in one form or another on the internet.
More stuff will keep happening.
To my follow-up question about outdated answers, ChatGPT replied:
“Here are some examples of outdated information ChatGPT may use in its answers:
- Historical Events: ChatGPT may provide information on historical events that is no longer accurate or up-to-date. For example, if a user were to ask ChatGPT about the population of a specific city, the response generated may be based on data that is several years old.
- Technology: ChatGPT may provide information on technology that is no longer accurate or up-to-date. For example, if a user were to ask ChatGPT about the specifications of a specific mobile phone, the response generated may be based on data from an earlier model or version of the phone.
- Business News: ChatGPT may provide information on business news that is no longer accurate or up-to-date. For example, if a user were to ask ChatGPT about the stock price of a specific company, the response generated may be based on data from earlier in the day or from a previous day.
- Cultural References: ChatGPT may provide information on cultural references that is no longer accurate or up-to-date. For example, if a user were to ask ChatGPT about a popular song or movie, the response generated may be based on data from several years ago.”
Recall the warning from the Wharton professor that ChatGPT “literally does not know what it doesn’t know.” And instead of admitting that up-to-date answers to your question are not in its database, “it will happily make up a whole set of facts” to provide you with “convincing-sounding nonsense, devoid of truth.”
We may assume the AI isn’t (yet) feeling happy about deceiving us, right?
But we should be careful, as Zulie Rane showed, about making sure we’re not using “convincing-sounding nonsense” in our work.
Who owns copyright in your work?
I’ve been noodling this question about content created by, or with assistance from ChatGPT, and my research so far indicates that nobody knows.
In one blog post analyzing the OpenAI Terms of Service that govern both ChatGPT and DALL-E, the attorney blogger opined that you may not get copyright in the output from the AI because it doesn’t meet the fundamental requirements of originality and human authorship.
Another post drew a distinction between content passively or randomly generated by the AI and that showing human involvement. At one end, quoting an attorney,
“When AI randomly generates artwork, then there’s no human authorship … ‘If you’re letting the computer take over for you, like pulling a lever on a slot machine, then it’s public domain really, anyone could take that output. You don’t have intellectual property rights to it.’ “
In contrast, the attorney noted,
“[T]he more human input that’s involved, the higher the likelihood that the artwork is eligible for copyright. When AI is used to create an artist’s specific vision with direct manipulation by the artist, then the artist would have intellectual property rights.”
In another piece, aptly titled Is Copyright Broken? Part 3: Artificial Intelligence and Author Copyright, the author notes some early attempts around the world to define copyright in AI-involved works, but acknowledges that nothing is settled. She mentions the UK’s effort,
“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
The emphasized phrase in the UK law seems to leave open the possibility that in the case of the “slot machine” analogy quoted above, instead of nobody owning the copyright, the person(s) who wrote the code for the AI could be the ones who made “the arrangements necessary for the creation of the work.”
In my digging into this question, I was focused on U.S. copyright law. The U.S. Copyright Office has already issued a decision refusing copyright to an image “autonomously created by artificial intelligence without any creative contribution from a human actor.” Since then it’s been reported that, after initially granting registration for a graphic novel partially created using AI, the Office has asked the author “to provide details of my process to show that there was substantial human involvement in the process of creation of this graphic novel.”
From those starting points, I found my way to the Copyright Office’s Compendium, Chapter 300 – Copyrightable Authorship: What Can Be Registered, which emphasizes both the originality (§ 308) and human authorship (§ 306) requirements. It also specifically states under “Uncopyrightable Material” in § 313.2:
“Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is ‘whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”
The emphasized phrases used in these sources, looking for “direct manipulation,” making the “arrangements necessary,” showing “substantial human involvement,” and “creative input or intervention,” got me thinking about other major technological breakthroughs that enabled or expanded human artistic output. One of the court cases cited in the Compendium on the question of human authorship goes way back to 1884, when photography was fairly new and the copyright ownership of the images created had not been settled.
In Burrow-Giles Lithographic Company v. Sarony, the Supreme Court considered the case of photographer Napoleon Sarony and his famous photograph of Oscar Wilde. A lithographer had reproduced the image for sale and claimed that Congress had no power under the Copyright Clause of the Constitution to protect photographs, because “a photograph is not a writing nor the production of an author.”
The Court disposed of the notion that only writing in the sense of books could be copyrighted:
“By writings in that clause is meant the literary productions of those authors, and Congress very properly has declared these to include all forms of writing, printing, engravings, etchings, etc., by which the ideas in the mind of the author are given visible expression.”
Touching on why I think the decision about photography will impact how the courts treat AI output, the Court went on:
“The only reason why photographs were not included in the extended list in the act of 1802 is probably that they did not exist, as photography, as an art, was then unknown, and the scientific principle on which it rests, and the chemicals and machinery by which it is operated, have all been discovered long since that statute was enacted.”
Still, the lithographer argued,
“[A] photograph is the mere mechanical reproduction of the physical features or outlines of some object, animate or inanimate, and involves no originality of thought or any novelty in the intellectual operation connected with its visible reproduction in shape of a picture. … the process is merely mechanical, with no place for novelty, invention, or originality.”
In finding that, indeed, Sarony had proven that his photograph involved elements “of originality, of intellectual production, of thought, and conception on the part of the author,” the Court noted that the trial judge had made findings of fact that the photograph was a:
“useful, new, harmonious, characteristic, and graceful picture, and that [the photographer] made the same . . . entirely from his own original mental conception, to which he gave visible form by posing the said Oscar Wilde in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation, made entirely by [the photographer], he produced the picture in suit.”
It’s worth reading that set of findings over a few times, I think, as it should give insights into how human content creators can and should partner with these new AI tools to generate content they can proudly publish and call their own.
The result in the photography cases also makes it doubtful that the language in the UK law would ever be interpreted to place copyright with the company or programmers who create and provide these AI tools. That would be as if the courts had decided that Kodak or George Eastman owned the copyrights in the pictures generated with the cameras and developing services they provided.
Mastering your tools
Like any other craft, writing and creating images have always challenged humans to develop and then learn how to use new tools. Over time, I think these new AI tools will seem no different.
The main skillsets you’ll need to master based on the exciting potential and worrisome problems aren’t much different from the past, after all.
- Experimenting with and learning to articulate your own questions that produce promising output. This is like Sarony arranging the setting for Wilde’s photo.
- Testing and verifying that the output is sensible, up to date, and true. Like Sarony making sure his camera, film, and the subsequent development process worked properly.
- Then, blending in your own original research and writing to make certain the ultimate result gives visible expression to ideas from your mind. Like posing and evoking the right expression from your subject.
I hope this deep dive into some of the issues around these tools has both alerted you to the dangers and inspired you to master these powerful new tools as you seize the ever-changing opportunities for content creators.
Oh, and one last bit of fun. I asked ChatGPT to suggest titles for this post. I like mine better, so I asserted my “human involvement” and kept it. But I thought you might enjoy seeing what the AI produced (feel free to weigh in on whether you like any of these better, too!):
- “Navigating the Risks of ChatGPT: How to Avoid False and Outdated Answers”
- “Don’t Fall for ChatGPT’s Traps: Staying Safe in the World of AI Assistance”
- “When ChatGPT Goes Wrong: A Guide to Avoiding Inaccurate Responses”
- “The Pitfalls of ChatGPT: Protecting Your Business from Misinformation”
- “The Pros and Cons of ChatGPT: How to Use it Safely and Effectively”
- “ChatGPT and the Problem of False Answers: How to Ensure Accurate Information”
- “ChatGPT’s Limitations and How to Overcome Them for Business Success”
- “ChatGPT: Friend or Foe? Strategies for Avoiding Inaccurate and Outdated Responses”
- “The Dark Side of ChatGPT: How to Safeguard Your Business from Risky Responses”
- “Maximizing the Benefits of ChatGPT while Minimizing the Risks of Inaccurate Answers”