Friday 18th October was Wordfest OC at Saddleback College.
The day featured poetry readings, a talk on Latinx voices, a conversation with Gustavo Arellano, a creative writing workshop led by Sarah Rafael Garcia, and a discussion on artificial intelligence titled DIGITAL IMAGINING: Creative Writing in the Age of AI.
I was particularly interested in the last one. The panel included my Creative Writing teacher Barbara DeMarco-Barrett along with Brett Myron, Amy Cameron and Rylee Robles.
Barbara's main point was that AI (or rather large language models, LLMs) is being 'trained' to write by hoovering up every book, article, blog post, and Reddit post, along with any other digitized text it can reach. Those texts are (in all modern cases) under copyright, and no one was paid for their use, or even informed that the training was taking place. The products of this surreptitious activity are now being sold for money.
Ethically, writers are owed compensation for this unpaid work. Barbara mentioned that Amazon (the bookseller) now limits individuals to uploading three books a day in order to at least slow down the flood of machine-generated stories and novels. Soon there will be a mass transfer of wealth from current copyright owners and future writers to the owners and users of "Generative AI."
Whether an LLM "reading" a text for training constitutes unlawful copying has yet to be decided by the courts, but cases of an LLM's output closely matching its training input, i.e. plagiarism, have already been detected, and court cases are apparently proceeding.
Amy talked about Narrative Magazine, where co-founder Tom Jenks has banned submissions of machine-generated text and updated the magazine's robots.txt file to disallow web crawlers from 'hoovering' the magazine's contents (a sketch of what that might look like is below). I submit to a lot of speculative fiction magazines, and can confirm that most have strong policies banning AI-written submissions. But how do you recognize and prevent it? Amy said that at UC Irvine, use of AI is not "policed" but she can recognize it because it "sounds robotic." She admitted that a time may come when she will be unable to recognize machine output.
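For anyone who hasn't met it, robots.txt is just a plain-text file sitting at a website's root that well-behaved crawlers are supposed to read before scraping. I have no idea what Narrative's actual file says, but blocking the major AI crawlers looks something like this (GPTBot, CCBot and Google-Extended are real crawler names; the exact rules here are my guess):

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Of course, robots.txt is only a polite request; it keeps out the crawlers that choose to honor it, nothing more.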
Rylee, a high school senior, took a different tack. She is a Marvel fan and asked an AI if it could put her in a Marvel movie; it made her a superhero. She said that her group created characters with AI, and within 45 minutes they had produced three stories that gave representation to everyone in the room. As a base layer to build on, she said, AI helped people create their own stories.
The panel discussed whether AI could be a valued part of the creative writing process in that way, by producing a "kernel of an idea." Barbara said prompt generators (programs that give you ideas for characters, setting and/or plot) had been around for years, and AI was an "unnecessary" improvement. Amy said "rising waters lift all boats": Google Search had raised the standard for good and mediocre writers alike. She felt that AI would do the same, and that it was important "to have a boat in the water" (get used to generative AI) or be left behind.
There was general agreement that current AI models produced output that lacked nuance, did not adequately express human emotion, and, as Brett put it, was "not weird enough."
During the audience question time, a listener mentioned that sometimes an apparently clear prompt will produce an AI output that "goes in the wrong direction." Rylee answered that sometimes human communication does that too, and the opportunity to rephrase and redirect the AI is good practice for clear communication in general.
And on that high note, I'll finish.
1 comment:
I hadn't, but I have now. Good old Cory - he always gets to the heart of the matter. I firmly agree that a publisher writing "No part of this book may be used in training an AI" takes more rights FROM the writer than it gives TO the writer. Can't even sell the AI rights yourself...