A report released this week looks at the impact AI is having on novels, and on those who (up until now) have written them. Impact of Generative AI on the Novel is by a team of independent researchers, working at Cambridge University, who are engaged in a mission of ‘radically rethinking the power relationships between digital technologies, society, and our planet.’
The report says that the UK government has so far focussed on the potential of AI for increasing ‘Growth’, without considering the detrimental effect it might have on the creative industries. Of course, successive UK governments have blindly chased ‘Growth’, on the ignorant and rather stupid assumption that it’s always a positive thing, so perhaps it’s no surprise that they view AI through the same rose-tinted spectacles.
The results of the survey on which the report is based suggest that the majority (67%) of novelists don’t use AI, and that those who do tend to use it only for non-creative tasks – the report gives the example of ‘information search’. I’ve never knowingly used AI, but I can’t understand why anyone would use it to search for information – isn’t that what search engines are for (unless a search engine is now considered a form of AI)?
The report says literary creatives are concerned about a loss of originality, as well as the risk of being accused of using AI when they haven’t. And of course, novelists are understandably concerned about being replaced by machines, with genre fiction predicted to be most under threat (perhaps due to its more formulaic structure).
39% of novelists believed that their incomes had already been adversely affected by competition from AI-generated books. Half of published novelists said they believed their work would, in time, be completely replaced by AI (a very worrying prospect). And 93% said they didn’t want their work to be used to train AI – yet more than half said they knew their work had already been used for just that, in almost all cases without their permission or any remuneration. I was already aware of this thanks to a blogging friend, the successful, published author Damyanti Biswas (you can read about her upsetting experience of AI ‘scraping’ here).
Among other things, the report recommends that copyright law should be strengthened, and a licensing market implemented, in which organisations that want to use existing work to train AI would have to both gain permission from, and provide payment to, the creators. This would of course require much more transparency from the tech companies, and much stronger regulation from government (which, in the current global political climate, seems unlikely).
For authors, as well as for other artists, the increasing use of AI looks to be a significant threat. For me personally, it suggests my chance of achieving success as a published author is slipping further away, however much my writing improves. But what about readers? If AI were to become so effective at producing novels that readers were unable to tell the difference, would it matter to them that a book was written by a machine, rather than a human being?
And for society generally? We’ve been through the information (overload) age and survived (albeit with rather more stress than we had before). But we seem now to be moving into a new age – a time when it’s increasingly difficult to know not only what’s true and what isn’t, but even what’s real and what isn’t. What will that mean for us? And will society survive without a basic grip on reality?