Meta's Head of Instagram: In the Age of Artificial Intelligence, What Matters Is Who Publishes Something
In a series of posts on Threads, Adam Mosseri, head of Instagram, said that users should not take the images they see on the internet at face value, because artificial intelligence is now generating content that can easily be mistaken for reality. Mosseri stresses that it is crucial to consider the source of the information, and suggests that social platforms should help users do exactly that.
The executive points out that “our role as internet platforms is to label AI-generated content as accurately as possible,” while acknowledging that “some content” may slip past those labels. That is why, he argues, platforms must also provide context about who is sharing the information, so that users can decide how much trust to place in it.
Just as it is worth remembering that chatbots can give incorrect answers, checking whether a claim or image comes from a reliable account helps in judging its veracity. For now, Meta's platforms offer little of the kind of context Mosseri describes, although the company has hinted at significant changes to its content rules.
What Mosseri proposes sounds closer to user-driven moderation, along the lines of X's Community Notes or Bluesky's custom moderation filters. It remains unclear whether Meta plans to introduce anything similar, although the company has shown before that it is willing to borrow ideas from Bluesky.