Social Media Companies Claim Ability to Determine What Is and Isn’t AI, Label It Accurately

Andrew Anglin

There is probably already no way to tell AI content from real content, and in the near future it will certainly be impossible.

Therefore, if social media companies wish to do so, they can label certain real content as “AI generated,” as well as allow certain AI content to pass as real.

It is totally absurd to imagine that anyone will be able to tell the difference, so social media companies setting themselves up as the authority on which is which is going to lead to mass abuse.

RT:

Meta will start labeling AI-generated content on Facebook and Instagram from May onwards, the tech giant has announced. Until now, the company had a policy of deleting such computer-created content.

The company will apply “Made with AI” labels to photo, audio, or video content created with artificial intelligence, it explained in a blog post on Friday. These labels will either be applied automatically when Meta detects “industry-shared signals” of AI content, or when users voluntarily disclose that something they post was created with AI.
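
Meta has reportedly pointed to provenance-metadata standards such as C2PA and IPTC as the source of those "industry-shared signals." Purely as an illustration of what that kind of check might look like, here is a minimal sketch that scans an image file's raw bytes for marker strings associated with those standards. This is an assumption-laden toy, not Meta's actual detection pipeline, and the marker strings are illustrative.

```python
# Crude, illustrative check for AI-provenance metadata in an image file.
# This is NOT Meta's detection system (which is not public); it only scans
# raw bytes for marker strings associated with the C2PA and IPTC standards.

import sys

# Illustrative marker strings; real detection would parse structured metadata.
AI_PROVENANCE_MARKERS = [
    b"c2pa",                     # C2PA content-credentials manifest label
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated imagery
]


def has_ai_provenance_marker(path: str) -> bool:
    """Return True if any known AI-provenance marker string appears in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI marker found" if has_ai_provenance_marker(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

Note that such markers are trivial to strip from a file, which is why the scheme also leans on users voluntarily disclosing AI content.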

If the content in question carries “a particularly high risk of materially deceiving the public on a matter of importance,” a more prominent label may be applied, Meta stated.

At present, Meta’s ‘manipulated media’ policy only covers videos that have been “created or altered by AI to make a person appear to say something they didn’t say.” Content violating this policy is removed rather than labeled.

The new policy expands this dragnet to videos showing someone “doing something they didn’t do,” and to photos and audio. However, it is more relaxed than the old approach in that the content in question will be allowed to remain online.

Because it would be ridiculous to delete it all. And they’re not doing that now.

Most image content you see on social media is already AI. Soon, most videos will be too.

“Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos,” the company explained. “In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving.”

Meta is not the only Big Tech firm to combat artificial content with labels. As of last year, TikTok asks users to label their own AI-generated content, while giving other users the option to report content they suspect was AI-generated. YouTube introduced a similar honor-based system last month.

With pivotal elections taking place in the EU in June and the US in November, lawmakers have pushed tech firms to take action against AI-created “deepfakes,” which they argue could be used to deceive voters. Earlier this year, Microsoft, Meta, and Google joined more than a dozen other industry leaders in promising to “help prevent deceptive AI content from interfering with this year’s global elections.”

Remember, these are the same people who blocked Hunter Biden’s laptop, claiming it was “Russian disinformation.” This AI labeling allows them to do that on a massive scale, with every little detail they want to manipulate.

Along with abuse, there will also be basic confusion. These companies are saying they think they can tell Heaven from Hell, blue skies from pain. But can they tell a green field from a cold steel rail? A smile from a veil?

I don’t think they can tell.

Original Article
