Meta is ‘Evaluating’ After Backlash Against Instagram’s ‘Made With AI’ Tags
Amid growing fury over its ham-fisted approach to labeling photos as “Made with AI,” Meta says it is “evaluating” how the tags are applied.
A spate of confusing labels has recently been attached to genuine photos, such as a picture of a winning cricket team in India lifting a trophy that was tagged as AI.
Former White House photographer Pete Souza, meanwhile, had one of his old 35mm film scans slapped with the tag. Souza told TechCrunch he suspects Adobe changed how its cropping tool works, meaning he now has to “flatten the image” before saving it as a JPEG.
Photographers are at a loss to explain what triggers the AI marker, which appears with seemingly no rhyme or reason. The tag understandably infuriates some photographers, since generative AI is associated with the unsanctioned training of models on their work, a controversial practice currently playing out in the courts.
Earlier this week, PetaPixel attempted to find out exactly what triggers the AI label. Tools such as Generative Fill in Adobe Photoshop apparently precipitate the label, even when used for a minor adjustment. Yet some users report that using Generative Fill doesn’t activate it at all.
This confusion and inconsistency are a problem for photographers, and Meta has now responded to the controversy.
“Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson said per The Independent.
“We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”
Meta’s acknowledgment that it is reconsidering its approach suggests things may change sooner rather than later. The company is presumably relying on metadata attached to an image that contains C2PA flags; C2PA is a technical standard intended to certify an image’s provenance and show whether AI was involved in making it.
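Meta has not published the exact fields it checks, but the kinds of signals editing software embeds can be inspected directly. The sketch below, which assumes the exiftool command-line utility is installed and is illustrative only, looks for two markers commonly associated with generative edits: an IPTC digital source type referencing algorithmic media, and an embedded C2PA/JUMBF manifest.

```python
import json
import subprocess
import sys

# IPTC digital source type values associated with generative AI.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",
    "compositeWithTrainedAlgorithmicMedia",
}


def inspect_image(path: str) -> None:
    """Print any AI-related provenance markers found in an image's metadata."""
    # Dump all metadata as JSON, with group prefixes, via exiftool.
    raw = subprocess.run(
        ["exiftool", "-json", "-G", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(raw)[0]
    found = False

    # Signal 1: the XMP digital source type that tools such as Photoshop
    # can write when generative features are used.
    source_type = str(tags.get("XMP:DigitalSourceType", ""))
    if any(marker in source_type for marker in AI_SOURCE_TYPES):
        found = True
        print(f"{path}: digital source type suggests AI involvement ({source_type})")

    # Signal 2: a JUMBF box, the container in which C2PA manifests are
    # stored inside JPEG files.
    jumbf_keys = [key for key in tags if "JUMBF" in key.upper()]
    if jumbf_keys:
        found = True
        print(f"{path}: embedded C2PA/JUMBF manifest found ({len(jumbf_keys)} fields)")

    if not found:
        print(f"{path}: no obvious AI provenance markers")


if __name__ == "__main__":
    inspect_image(sys.argv[1])
```

Whether a platform treats these markers as “minor retouching” or “Made with AI” is a policy choice on top of the metadata, which is exactly the distinction Meta says it is now evaluating.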
But whatever system Meta uses to detect AI content, it is clearly not working properly in its current form.
Image credits: Header photo licensed via Depositphotos.