|The Press Association can write 30,000 local news stories per month using AI. / Photo by rawpixel via 123rf|
Do you believe that anything and everything could be automated with AI? If so, don’t be surprised that many media organizations, including the Associated Press, The New York Times, Reuters, Washington Post, and Yahoo! Sports, already use AI to create content, said Bernard Marr of business news platform Forbes. For instance, The Press Association can write 30,000 local news stories per month using AI.
Aside from news stories, AI can also be used to write short fiction or compose music, explained Chris O’Brien of Venture Beat, a technology news website. We like to believe that creativity and emotion are primal urges that define our humanity. Using AI to replicate these traits could help close the gap between machines and humans, but at what cost? Fake news?
Krystin Tynski’s Experiment With AI-Generated Content
Krystin Tynski, VP of digital marketing firm Fractl, sees an opportunity to use AI as a way to boost creativity. However, a recent experiment involving AI-generated content “left her a bit shaken.” Using publicly available AI tools, Tynski managed within an hour to create a website called ThisMarketingBlogDoesNotExist.com. The site contains 30 highly polished blog posts, including AI-generated headshots for the posts’ faux authors.
Tynski’s intention was to spark conversation about the site’s implications. But the experiment gave her a sneak peek of a potentially grim and ominous digital future in which it becomes increasingly difficult to differentiate reality from fiction, shifting the delicate balance of power among search engines, creators, and users.
The abundance of fake news and propaganda across the internet is already fooling many people. Digital platforms are trying to weed it out, but automating content creation could prevent journalists and brands from connecting with audiences that distrust search engine results, believing everything they see is fake.
Alarmingly, AI could be weaponized to unleash a deluge of propaganda that severs the bond between governments and citizens. According to Tynski, the era of “high-quality, AI-generated text content” could befoul search engines and the internet with garbage. Google could struggle to determine whether a particular piece of content was mass-produced; even if it could, doing so would take immense time and resources.
|Alarmingly, AI could be weaponized to unleash a deluge of propaganda that could sever the bond between the governments and citizens. / Photo by Vitaliy Vodolazskyy via 123rf|
The Process of AI Content Creation
Natural language generation (NLG) is software that automatically generates a “written narrative from data.” It is used for business intelligence dashboards, personalized emails, business data reports, in-app messaging, client financial portfolios, and more.
The first step is to determine the content’s format. Do you want to create a social media post or a poem? Each has its own distinct style and format. The narrative design, or template, is created by the end user, by the NLG solution itself, or by the software provider.
Structured data is then fed into the NLG tool, which processes it via “conditional logic” built into the narrative design. The goal is to produce output that reads like “human-generated content,” not an incomprehensible mess.
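To make the process concrete, here is a minimal, illustrative sketch of template-based NLG: structured data goes in, conditional logic picks the phrasing, and a template produces the narrative. All field names, thresholds, and wording here are hypothetical, not taken from any actual NLG product.

```python
# A minimal sketch of template-based NLG with "conditional logic":
# structured data in, a short narrative out. All names and
# thresholds are invented for illustration.

def describe_change(pct: float) -> str:
    """Conditional logic: choose a phrasing based on the data."""
    if pct > 5:
        return f"surged {pct:.1f}%"
    if pct > 0:
        return f"edged up {pct:.1f}%"
    if pct == 0:
        return "was flat"
    return f"fell {abs(pct):.1f}%"

def generate_narrative(record: dict) -> str:
    """Fill the narrative template (the 'narrative design') with data."""
    return (
        f"{record['company']} stock {describe_change(record['change_pct'])} "
        f"on {record['date']}, closing at ${record['close']:.2f}."
    )

row = {"company": "Acme Corp", "date": "Monday", "change_pct": 6.2, "close": 48.31}
print(generate_narrative(row))
# → Acme Corp stock surged 6.2% on Monday, closing at $48.31.
```

Real NLG platforms layer far more logic on top of this idea, but the core loop is the same: branch on the data, then fill a human-authored template.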
Why Are Companies and Organizations Investing in NLG Tools?
NLG enables firms to process data sets efficiently and, in turn, produce thousands of narratives within a small timeframe. NLG also enables “complex personalization at scale,” which can benefit customers. For example, the 401k quarterly summaries you receive were likely written by NLG, yet they are personalized because they draw on your “unique set of information.”
NLG is also used to save time and resources. German bank Commerzbank uses AI to write equity research reports. Although the process is not completely automated, the AI can perform 75% “of what a human equity analyst would have done.” The Associated Press uses AI to create thousands of sports reports: the AI scans the data from a game and draws out the insights deemed important for the reader.
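An automated sports recap can be sketched the same way: scan the box score, flag the notable fact, and slot it into a template. This is a hypothetical illustration; the team names, stats, and wording are invented, and real newsroom systems are far more elaborate.

```python
# Hypothetical sketch of an automated sports recap: scan the
# box-score data, pick the notable angle, fill a template.
# Teams, scores, and phrasing are all invented.

def recap(game: dict) -> str:
    home, away = game["home"], game["away"]
    if home["score"] > away["score"]:
        winner, loser = home, away
    else:
        winner, loser = away, home
    margin = winner["score"] - loser["score"]
    # The "insight": a blowout reads differently from a close game.
    tone = "cruised past" if margin >= 10 else "edged"
    return (
        f"{winner['name']} {tone} {loser['name']} "
        f"{winner['score']}-{loser['score']} on {game['date']}."
    )

game = {
    "date": "Friday",
    "home": {"name": "Rivertown FC", "score": 3},
    "away": {"name": "Lakeside United", "score": 1},
}
print(recap(game))
# → Rivertown FC edged Lakeside United 3-1 on Friday.
```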
Do We Even Know What’s Real?
Earlier this year, OpenAI announced a powerful new language model so fluent that it could rival a human’s ability to write text. However, OpenAI said it would not release the software, worried it would be abused to create fake content. Tynski echoed that such tools can be used for “nefarious purposes.”
It’s hard to create authentic content when the internet is polluted with bots on social media platforms and overseas click farms where workers produce copy for pennies. Imagine reading a fluent, well-written article on any topic you like. Would you really know whether it was written by a human or an AI? AI and NLG tools are within anyone’s reach. Some argue this further democratizes content creation, while others worry these tools will be used to proliferate fake news.
Companies like YouTube, Twitter, and Facebook are attempting to “stem the tide of fake news and propaganda” using their own AI systems and human teams. But fake news and propaganda are still winning.
Overall, NLG and AI can help us generate content quickly, making you wonder whether these tools can trump human creativity. At the same time, AI can be abused to propagate fake news and propaganda. In this post-truth era, we struggle to tell what is real from what is fake. Tools exist to help distinguish genuine content from fabricated content, but we must still exercise caution.