Sloppy ideas and dehumanizing the social media experience
The rise of generative artificial intelligence across various
industries is leading to less authentic human interaction on social media, a
trend that may be less obvious to younger users or to those not active
online. The issue is especially apparent in AI-generated videos and images,
which often appear convincing on the surface, featuring realistic voices or
likenesses, but lack true authenticity or proper verification. The term "AI
slop" refers to content that looks authentic but does not meet
standards of trustworthiness or accuracy.
For long-time internet users like me (online since the late
1990s), nothing about AI-produced material feels unique; it lacks the
intricacies and honesty that come from human touch. Instead, these works carry
perceived biases and implied truths rather than undisputed facts. For example,
I recently saw a YouTube video showing Oprah Winfrey defending Sean
"Diddy" Combs during his criminal trial, a testimony that never
actually occurred. Some viewers believed the video was real because Oprah
had previously supported Combs.
This phenomenon is not exclusive to YouTube. On platforms like
Twitter (now X), the prevalence of AI-generated images and posts, particularly
those revolving around public figures such as President Donald J. Trump,
continues to grow. Mr. Trump has created part of this material himself. This
content is frequently shared on his social media accounts, whether by him, his
supporters, or people in his political circle. These posts elicit significant
responses; some individuals express surprise, while others demonstrate
enthusiastic support.
My concern is that this problem will worsen, especially as Elon
Musk plans to move all algorithms and oversight on X into the Grok AI system.
Similar issues are surfacing on other platforms too. Facebook frequently
promotes clearly fake AI-generated stories, which still prompt lively comment
threads. Sometimes I wonder if certain commenters are themselves AI-generated.
Authenticity on social media is at risk—not necessarily because of
artificial intelligence itself, but due to the way people prompt AI systems to
produce whatever content they desire and pass it off as fact. If unchecked,
this trend could lead to a decline in trust and reliability across social media
over time. To address this, we need consistent standards and reliable
authentication for accounts and content. Currently, X's paid authentication
model makes it easy to obtain verified status, but most "authenticated"
users tend to represent one political perspective, which undermines the
platform's integrity. For the health of online discourse, it is crucial to act
quickly to fix these problems.
That is just my opinion, though. I would welcome your
thoughts; feel free to comment on this post and share what you think. Before I
sign off, I am linking to a recent video from John Oliver's "Last Week
Tonight," where he discussed the topic of AI slop. I found it insightful,
and I hope you will find it interesting too.
Talk soon,
Robert Kelly
AI literally scares me, mainly because of what it is becoming. It's on every social media platform, and sometimes the videos are funny, but it can be scary. I sometimes take a step back and ask myself: is this AI or a real video/image? I liked how you talked about authenticity being at risk, because no one wants to create original art anymore. It's sad that the world is turning into this... let's hope we wake up and realize that AI is doing more harm than good.
AI slop like you describe has truly taken over the internet in the worst way possible. It is at the point where it can't be avoided. It's not just online anymore, either: I was recently at a Hobby Lobby and saw what was unquestionably AI art for sale. It really disappoints me how prevalent it is, and it gives me at least some comfort knowing I am not alone in this thought.