Time to Call BS on Using AI to Create Content
As AI goes mainstream, we need consensus on what counts as acceptable use and what does not.
AI was one of the key themes at the recent SaaStock conference in Dublin 🇮🇪 .
However, the use cases are wide-ranging, and we are already seeing some companies promote applications with what economists would call negative externalities.
Using AI to mass-produce content is one of them. Flooding an already crowded online world with AI-generated content is a clear case where one party's use of AI imposes a cost on everyone else.
Google should announce that it will penalise AI-generated content that seeks to game SEO. After all, if Google's aim is to surface authoritative content written to help others, then it should disincentivise those using AI to mass-produce what is essentially regurgitated material.
Medium CEO Tony Stubblebine recently wrote that "AI companies have nearly universally broken the fundamental rules of fairness. They are making money on your writing without asking for your consent, nor are they offering you compensation and credit. There's a lot more one could ask for, but these '3 Cs' are the minimum." He goes on to claim that "AI companies have leached value from writers in order to spam Internet readers."
He is correct.
As AI adoption accelerates, we need to have discussions about what is acceptable and what is not. Using AI to mass-publish spam content in an attempt to game SEO crosses the line. We are back in Garrett Hardin territory and the Tragedy of the Commons: if we each act in our own self-interest, we will destroy the commons for all. AI-produced content increases the noise for everyone, piggybacks on the work of others without attribution, and makes fact-checking harder, to name but three of the problems with it.
About the Author
Alan Gleeson is the CEO and Co-Founder of Contento, a modern Content Management System.