How to Recognize Videos Generated by Artificial Intelligence
It all comes down to AI literacy.
Identifying videos generated by artificial intelligence (AI) has become increasingly difficult, as current tools can create highly realistic footage in just a few clicks. Although AI-generated video is still a developing category compared to formats such as text, images, and audio, it is clear that it will improve rapidly. According to Siwei Lyu, a professor at the University at Buffalo, there are no fundamental obstacles to reaching higher quality, only the labor-intensive work required to get there.
The key to recognizing these videos lies in a solid understanding of what AI can do. Rather than relying on individual visual cues, Lyu emphasizes the importance of simply being aware that what you are watching could be AI-generated. In other words, AI literacy is crucial in an era when misinformation can lurk in every corner of the web.
There are different types of AI-generated video. Deepfakes alter faces, replacing one person's (often a public figure's) with another's, and may also manipulate mouth movements to synchronize with different audio. Because of the technology involved, these videos are often quite convincing. Pay attention to the video's format: deepfakes tend to favor a "talking head" framing. Also examine the edges of faces for defects that may indicate manipulation.
Diffusion models, by contrast, generate video from written descriptions or still images. One of the most prominent examples is OpenAI's Sora video generator, which has impressed with its ability to produce visually appealing material. However, the technology is not yet fully mature: generated clips often require meticulous curation and editing to look acceptable, which is why very short clips can themselves be a red flag about authenticity.
With these technological advances comes growing concern about misinformation. The veracity of a video can be difficult to verify, especially when it involves lesser-known figures. AI literacy becomes essential for investigating and verifying content before accepting it as authentic.
Finally, it is crucial to be cautious about the information consumed and shared on social media, and to always seek additional sources that can corroborate a video or image. As Lyu suggests, maintaining a critical mindset and questioning where information comes from is fundamental to combating fake news, as is developing the ability to identify AI-generated content.