A viral fake image of an explosion at the Pentagon briefly rattled the stock market, renewing debate over the societal risks posed by generative artificial intelligence (AI).
The image, suspected to be AI-generated, was shared by numerous accounts, including a verified Twitter account falsely posing as an affiliate of Bloomberg News. Experts identified tell-tale signs of AI-generated forgery in the image, underscoring how the growing sophistication and accessibility of AI image generators can sow chaos.
> #VantageOnFirstpost: A #fake AI-generated image of a #blast near the #Pentagon triggered a brief stock rout in the #US. It was also reported by a handful of media houses. How can you identify fake #AI images online? Is it time to regulate AI? @Palkisu tells you. pic.twitter.com/QEf2r8Ia1e
>
> — Firstpost (@firstpost) May 23, 2023
The image spread shortly after the U.S. stock market opened, and the timing triggered a ripple effect in the investment world. The S&P 500 dipped temporarily, and investors moved toward safer assets such as U.S. Treasury bonds and gold.
Misinformation experts believe the fake image was created with generative AI programs, which produce realistic but often flawed visuals. Inconsistencies in the image, such as irregularities in the building's facade, the fencing, and the surroundings, were hallmarks of AI generation.
Misinformation is especially damaging when shared by outlets perceived as credible, and the market's sensitivity to headline news only amplifies its impact.