Meta Is ‘Evaluating’ Its Approach After Backlash Against Instagram’s ‘Made With AI’ Tags


Amid growing fury over its ham-fisted labeling of photos as “Made with AI,” Meta says it is “evaluating” its approach.

A spate of confusing labels has been attached to genuine photos recently, such as a picture of a winning cricket team in India lifting a trophy that was labeled as AI.

[Embedded photo: a cricket team in purple uniforms lifting a trophy under bright pink and purple lights, captioned “This scene really happened.”]

[Embedded photo: a black-and-white shot of Celtics player Larry Bird celebrating during Game 7 of the NBA Finals between the Celtics and Lakers.]

Meanwhile, former White House photographer Pete Souza had one of his old 35mm film scans slapped with the tag. Souza told TechCrunch he suspects it is because Adobe changed how the cropping tool works and he now has to “flatten the image” before saving it as a JPEG.

Photographers are at a loss to explain what triggers the AI marker, which appears with seemingly no rhyme or reason. The tag understandably infuriates some photographers, as generative AI is associated with the unsanctioned training of models on their work, a controversial practice currently playing out in the courts.

Earlier this week, PetaPixel attempted to find out exactly what triggers the AI label. Tools such as Generative Fill in Adobe Photoshop apparently precipitate it, even when used for a minor adjustment. Yet other users report that using Generative Fill does not activate the label at all.

This confusion and inconsistency are a problem for photographers, and now Meta has responded to the criticism.

“Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson said per The Independent.

“We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”

The acknowledgment that Meta is reconsidering its approach is a sign that things may change sooner rather than later. Meta is presumably relying on the metadata attached to an image that contains C2PA flags — a technical standard designed to certify the provenance of an image and show whether it was AI-generated.
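For readers curious what these signals look like on disk, below is a minimal sketch of how one might inspect an image for them. It assumes the exiftool command-line tool is installed, and the specific checks (the IPTC “trainedAlgorithmicMedia” digital source type and embedded C2PA/Content Credentials data) are illustrative rather than a description of Meta’s actual detection pipeline.

```python
# Minimal sketch: dump an image's metadata with exiftool and flag fields
# commonly associated with AI provenance. The exact fields Meta reads are
# not public; these checks are assumptions for illustration only.
import json
import subprocess
import sys


def find_ai_indicators(path: str) -> list[str]:
    """Return metadata fields that hint at AI generation or AI-assisted edits."""
    result = subprocess.run(
        ["exiftool", "-json", "-G", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(result.stdout)[0]

    hits = []
    for key, value in tags.items():
        text = str(value).lower()
        # IPTC's "trainedAlgorithmicMedia" digital source type marks fully
        # AI-generated imagery; "compositeWithTrainedAlgorithmicMedia" marks
        # images partially edited with generative tools. Both contain the
        # same substring, so one check catches either value.
        if "trainedalgorithmicmedia" in text:
            hits.append(f"{key} = {value}")
        # C2PA / Content Credentials manifests are embedded as JUMBF blocks;
        # recent exiftool versions expose them under JUMBF/C2PA tag groups.
        if "c2pa" in key.lower() or "jumbf" in key.lower():
            hits.append(f"{key} = {value}")
    return hits


if __name__ == "__main__":
    for field in find_ai_indicators(sys.argv[1]):
        print(field)
```

Running something like this against an exported JPEG before uploading can reveal whether an editor has quietly written one of these provenance flags into the file.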

But whatever Meta’s system is for detecting AI content, it is clearly not working properly in its current form.


Image credits: Header photo licensed via Depositphotos.
