Does AI Content Undermine Ethical Communication?
Artificial Intelligence is one of the most exciting advances in human technology that I've seen in my lifetime. It has the capacity to fundamentally change the way in which we interact with the world around us. It can make our lives easier and more efficient in countless ways.
For communicators, though, it poses an ethical question. To what extent is it acceptable to use AI in the generation of content for public consumption?
This article was not written by AI, not because AI couldn't write (or, at least, draft) it but because I enjoy the process of committing thoughts to print. However, I did use an AI engine to create two other articles on a topic I feel very strongly about - one titled 'Why is evolution wrong?' and the other titled 'Why is evolution right?'. Both used exactly the same settings - I entered the title, used the same keywords for both articles and then simply accepted the recommendations from the AI engine with no changes. The whole process took less than 10 minutes for the two articles.
To an untrained eye, both articles are moderately authoritative. The writing is fairly clear and precise. There are a few obvious signs that they weren't written by a human being (e.g. footnote references, missing graphics), but look closer and the factual errors start to jump out. For example:
Why Evolution is Wrong:
"[T]he earliest human remains date back to about 7 million years ago (about 6 million years after dinosaurs went extinct." The dinosaur extinction is confirmed to have happened about 65 million years ago.
"Robert Kofahl, Stephen Jay Gould, Jonathan Wells and Michael Denton are just a few of the scientists who support creationism." The late Stephen J. Gould was one of the most prominent evolutionary biologists and certainly did not support creationism.
Why Evolution is Right:
"Darwin's theory of natural selection holds that organisms vary genetically and some traits confer an advantage in their environment." Darwin had no knowledge of genetics. His theory is based entirely on physical observation and reason. Genetics was a later addition.
"There are a number of things that separate humans from other animals. We use tools to make our lives easier, we speak to each other, and we can even make fire!" The first two points are demonstrably incorrect. Many animals, from crows to apes to octopus have famously been observed using tools and, while animals don't use language as we do, they certainly speak to each other - vervet monkeys, for example, use different cries to indicate threats from different predators.
What I find most troubling about this, though, is that I generated two articles that did nothing to challenge my stated opinion and which both contained factual inaccuracies. Research is core to any piece of writing - an awareness of multiple points of view is critical in developing an informed and defensible position. At a time when false stories, 'alternative facts' and 'personal truths' dominate news cycles, communications practitioners need to be even more alert to how they consume and create content.
From the perspective of ethical, professional practice, I think there are five things communications practitioners need to do with regard to AI-generated content:
Practice critical thinking: Don't just accept what you read as fact. Examine it from a neutral mindset. Are the claims supported by good evidence? Are the premises sound and valid? What alternative explanations exist? Assess what you are reading and hearing through a critical lens to avoid accepting an inaccurate picture of what's going on around you. For a short 'how-to' on critical thinking, take a look at 'Build Your Critical Thinking Skills in 7 Steps' on the Asana website.
Look for contrary voices: Actively discover and review opinions that are different from your own. They might not change your mind, but they help make sure you aren't just getting information from your own personal echo chamber.
Make yourself aware of the tell-tale signs of AI-generated content: Incomplete sentences, inconsistent language and internal inconsistencies can all betray generated content. There are also several online tools you can use to assess the likelihood that a piece of content was generated. See 'How To Check If Something Was Written with AI' on Gold Penguin for a good overview.
If you are using AI, make it the start of the process, not the end: There's nothing wrong, in my opinion, with using an AI tool to create a first draft of a piece of content, but the generated piece should be the start of the process, not the end. Examine, rework, reframe and develop the piece until you are satisfied that it represents the best work you can do.
Be transparent: If you're using AI as part of your content creation process, say so. Whether or not AI-generated content counts as plagiarism is a debatable point - for two points of view, look at 'Is Using Artificial Intelligence Plagiarism?' on Medium and 'Do AI Writing Tools Plagiarize? Can You Trust Them?' on the HyperWrite blog (disclosure: HyperWrite is an AI content generator but not the one I used in my example above). Disclosing that an AI tool was used as the starting point for a piece of content insulates you from charges of duplicity if you are challenged on its authenticity.
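As a playful illustration of the tell-tale-signs point above, two of the surface signals mentioned (incomplete sentences and repetitive, inconsistent language) can be approximated in a few lines of code. This is a toy sketch only - the online detection tools referenced earlier rely on statistical language models rather than simple rules, and the function name and thresholds here are invented for illustration:

```python
import re
from collections import Counter

def flag_suspect_text(text, ngram_size=3, repeat_threshold=3):
    """Toy heuristics for two surface signs of generated content:
    sentences that trail off without punctuation, and identical
    phrases repeated verbatim. Illustrative only."""
    flags = []

    # Split into rough sentences; flag any lacking terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip())
                 if s.strip()]
    for s in sentences:
        if s[-1] not in ".!?":
            flags.append(f"possibly incomplete sentence: {s!r}")

    # Count word n-grams; heavy verbatim repetition can betray generated text.
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = Counter(
        tuple(words[i:i + ngram_size])
        for i in range(len(words) - ngram_size + 1)
    )
    for phrase, count in ngrams.items():
        if count >= repeat_threshold:
            flags.append(f"phrase repeated {count} times: {' '.join(phrase)}")

    return flags
```

Real detectors go far beyond rules like these, but the exercise makes the point: the signals are mechanical, and a careful human reader can learn to spot them too.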
AI is a tool, and like any other tool it is neither inherently good nor bad. The ethics of AI lie in the intentions of its user. As communications professionals, we should be champions of ethical practice, and to perform that role we need to be aware of how AI-generated content is used to affect the opinions and attitudes of our stakeholders.