Asking journalism students not to use artificial intelligence in their work is like asking them “not to have sex”, a leading trainer has claimed.
Paul, who teaches data journalism at Birmingham City University, has produced a series of tips on adapting teaching to deal with students using generative AI and tools like ChatGPT.
In a blog post on the subject, he addressed plagiarism and attribution.
Paul wrote: “Plagiarism is one of the biggest concerns about ChatGPT and similar tools: its ability to write plausible (if not entirely factual) material that isn’t word-for-word taken from an existing source makes it a tempting option for the potential cheat.
“But asking journalism students not to use ChatGPT is like asking students not to have sex: many will have already done it, and they’re going to be much more interested in explanations on how to do it well than in admonitions not to do it at all.
“Getting students (or guest speakers) to have a discussion about how they have used generative AI is a useful first step; then we can explain how they can get credit for using ChatGPT.
“That means, firstly, building confidence in attributing. Many universities, from Exeter and Newcastle in the UK to NYU and Monash, have already begun publishing guides on their websites on how to attribute the use of tools like ChatGPT.”
Offering further advice to trainers, Paul went on: “The first challenge in teaching about generative AI is that most people misunderstand what it actually is — so the first priority is to tackle those misunderstandings.
“One common misunderstanding is that generative AI tools can be used like search engines: type in a question; get an answer.
“But tools like ChatGPT are better seen as storytellers — specifically, as unreliable narrators, with their priority being plausible, rather than true, stories.
“This doesn’t mean that the output of generative AI tools should be disregarded, in the same way that some people tell journalists not to use Wikipedia.
“It just means that it is an opportunity to introduce general journalistic practice in verifying and following up information provided by a source.
“A useful analogy here is the film ‘based on a true story’. You’d want to know which elements of the film were true, and which were added for dramatic effect, before believing it.”
Paul’s comments came amid fresh concern about the use of AI in UK journalism following HTFP’s investigation into the Bournemouth Observer, a new website set up using fake journalist profiles.
The site, which plans to launch in print, used images from a stock picture archive to illustrate a series of bizarrely written profiles of its ‘journalists’, including ‘esteemed editor’ David Roberts and ‘middle-aged journalist’ Simon Foster.
Paul Giles, a representative of the title, was unable to confirm whether they were real people or whether AI had been used in compiling the stories on the site.
He claimed in an email that the Observer boasted a “powerhouse editorial team” with a “wealth of experience” – but HTFP readers familiar with its Dorset patch said they had never heard of any of them.