Exploring AI Part 2: limitations and legalities


Content alert: the following was written by a human.

In this three-part series exploring AI, Books Forward is chatting with Dr. Andrew Burt, author of numerous published works of science fiction, including his newest novel, “Termination of Species,” for readers who like AI, biotech, chess and a bit of romance.

Dr. Burt was VP of the Science Fiction and Fantasy Writers Association (SFWA) for several years. He heads Critters, the first writers’ workshop on the web and home to other writerly resources. He runs ReAnimus Press and Hugo-winning Advent Publishers, helping award-winning and bestselling authors breathe life into great books. Outside of writing, he’s been a computer science professor (AI, networking, security, privacy and free-speech/social issues); founder of Nyx.net, the world’s first Internet Service Provider; and a technology consultant, author and speaker. For a hobby, he constructs solutions to the world’s problems. (He jokes: Fortunately, nobody listens.)

PART 2: Limitations and legalities

What can AI NOT do?

There are plenty of non-writing-related tasks that AI won’t help with, at least for now.

The current breed of AI is focused on creating content, not so much on finding answers to questions or planning. So ChatGPT isn’t going to be great at finding a list of places to advertise, but it can help write ads. It could regurgitate ideas for marketing plans that it’s found others have written, but AIs can’t yet really plan such things. Remember, it’s all based on the probability of which word comes next in an answer.

That’s just today’s AIs. Nobody foresaw ChatGPT’s capabilities just a couple years ago; it just popped up based on AI people trying things with huge amounts of data. (And, frankly, getting surprised at the result.) Tomorrow’s AIs… who knows. That’s one reason I wrote a novel about where AIs and humans might be going.

What should authors be wary about when it comes to using AI?

If a newbie wants to use AI to write their whole book/story/article, then it won’t really be “theirs”; it won’t be their own artistic creation. If the goal of a certain author is to breathe life into their own artistic creation, then the more they use AI the less they’re doing that.

If an author’s goal is to make money by quickly creating some particular text, then AI is getting pretty close to that in a number of areas. Again, the shorter the text, the better AI will be at it. A lot of AI-generated text won’t be salable, though, so don’t bother trying to get rich quick by sending publishers a bunch of AI stories or books, as many people are doing, clogging their slush piles. This ultimately hurts new authors, as publishers shut their doors to slush and only accept work from authors they know, or via agents or other forms of vetting.

If an author does want to create “art,” they should minimize the use of AI. That applies to any use of it, for idea generation, editing for length, smoothing out word choices, or critiquing—any of that reduces their artistic input.

A big problem with AI content is that it often contains factually incorrect information. So, never rely on AI content to be correct. The technical term for it is “hallucination,” but the lay term for it is “lying.” I wrote a couple of blog posts about this, showing how insidious their lying is. Generative AIs are structurally incapable of telling true information from false (they are literally making randomized guesses at which word might sound good next in a sentence), so I’ll repeat this: Never rely on them for factual accuracy.

Another pitfall to be aware of with AI-created text is that it tends to have a certain style about it. It can often come off as bland, corporate, uninspired or generic. AIs are, after all, mimicking a sort of “average of everything ever written” in their approach. Even if you ask for a certain style, these AIs still work by seeking the most generic output: they pick the “most likely” next word given the words so far, then the “most likely” word after that, and the next. This inevitably produces a non-unique style; or, if you ask for a specific style, like “write like a pirate,” you get a generic version of that requested style.
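To make the “most likely next word” idea concrete, here’s a toy sketch in Python. It is only an illustration, not how production systems actually work (real LLMs use neural networks over subword tokens and often sample rather than always taking the top choice): it counts word pairs in a tiny made-up corpus, then always picks the single most frequent continuation, which is exactly why the output drifts toward the most generic phrasing.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus (an assumption for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a "bigram" model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Greedy choice: always the highest-count continuation,
    # i.e. the most generic possible next word.
    return bigrams[word].most_common(1)[0][0]

def generate(start, length):
    words = [start]
    for _ in range(length):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(generate("the", 3))  # always the same output: "the cat sat on"
```

Because “cat” follows “the” more often than “mat” or “fish” do, the greedy rule picks it every time; run the generator twice and you get the identical sentence, which mirrors the blandness the interview describes.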

To further keep you up at night, there may be obvious or hidden biases in AI output (gender, race, ethnicity, etc.).

And some readers may react negatively if they know or suspect you’ve used AI. For example, people have won awards for what turned out to be AI-generated content. Readers ask: how deserving are they? How much of the award-winning work is their own, versus created by software?

What are the legalities surrounding AI?

From a legal standpoint, it’s unclear whether there are copyright infringement issues at play. The current batch of AIs were “trained” by “reading” massive amounts of (copyrighted) material. If they then spit out some text, there’s a question whether it’s either a direct copy of some of the input text (unlikely, but not impossible), or a close enough derivative of it, that some author of the original text could find out and bring legal action over it. Such lawsuits are already happening. (Whether they win or lose, defending yourself against a lawsuit is costly and time consuming.) Some authors contend that merely using their work as training input without permission is illegal, and thus that anything it creates as output is likewise illegal. It will be for the courts to decide, since this is such unexplored legal territory. There’s no law against copying an artist’s style (freedom you have to like Yoda write); you just can’t copy others’ specific artistic creations.

It’s also unclear whether AI-generated or AI-assisted text can itself be copyrighted. So far the answer for purely AI-generated text has been “no.” So if you use AI to write a book, article, etc., it may not be something you can copyright; that is, anyone else may be able to copy it for free. Using AI for assistance? The copyright status there is totally unknown.

As a final thought, Amazon has been asking authors if their KDP content was produced with assistance from AI. It’s unknown what they’ll do with this information, but it’s possible they’ll refuse to publish such works (as they now do with books that contain whatever they deem too much public domain content).