
AI-Triggered Misinformation Clouds Key Events Following Shooting Incident


AI-altered images and videos have spread rapidly on platforms such as Facebook, TikTok, Instagram, and X, particularly around the tragic shooting of Alex Pretti by federal officers in Minneapolis. Since the incident last weekend, AI-manipulated media has proliferated, muddying key details about what happened on social media.

Unlike many deepfakes that are easily recognized as unrealistic, some AI-altered depictions of Pretti’s final moments appear grounded in reality due to their use of verified images. This creates confusion and misleads audiences online. Despite growing awareness of AI’s capabilities, there is still considerable online skepticism extending to authentic media, with false claims suggesting that genuine photos and videos of Pretti have been altered by AI.

One AI-modified image, showing Pretti collapsing as a federal officer stands behind him with a gun, has been viewed more than 9 million times on X. A community note identifies it as AI-enhanced, and the image contains uncanny details, including a headless ICE officer. Sen. Dick Durbin, D-Ill., displayed the image during a Senate speech without realizing it was inauthentic; his office later acknowledged the mistake, saying the image had been inadvertently circulated in its edited form.

Videos have spread widely as well, including an AI-generated TikTok clip of an interaction between Pretti and an ICE officer and a Facebook video appearing to show a police officer firing Pretti’s weapon. Both carry community notes labeling them as AI-modified, deepening the difficulty viewers face in separating truth from fabrication. It remains unconfirmed whether the officer in question actually fired Pretti’s gun.

Ben Colman, co-founder and CEO of Reality Defender, a company that specializes in detecting deepfakes, called the widespread dissemination of AI-manipulated media concerning but unsurprising. He noted that AI-modified images purporting to reveal the identities of officers involved in other recent Minneapolis shootings have circulated similarly, prompting misidentifications online. Colman dismissed these deepfakes as crude approximations, saying they do nothing to genuinely enhance footage or unmask the individuals involved.

The circulation of AI content has also fueled false claims about authentic videos of Pretti, which experts fear could feed the "liar's dividend": a dynamic in which genuine media is dismissed as AI-generated, eroding trust and accountability. NBC News independently verified three videos showing an altercation between Pretti and federal agents, yet some social media users labeled those clips AI-generated as well.

One verified video, showing Pretti kicking an agent’s vehicle before being subdued, was confirmed by his family. A witness who captured another verified video even recounted hugging Pretti after the altercation. Consumers, meanwhile, have few reliable tools for determining whether content has been manipulated by AI, as illustrated by multiple misleading replies about the footage’s authenticity from X’s AI assistant, Grok.

The buildup of AI-driven misinformation around Pretti’s case, compounded by misidentifications spread by right-wing influencers, illustrates the growing challenge posed by advancing AI technologies, which now make it easy to produce convincing fake images and videos that blur the line between what is real and what is fabricated.

Jared Perlo, a fellow covering AI with the Tarbell Center for AI Journalism, alongside Marin Scott and Bruna Horvath from NBC News, contributed insights on this issue.
