Musk Weakens the Block Feature on X: The Reasons Behind This Change and Its Possible Dangers
The tool has long been a first line of defense against harassment.
The change Elon Musk has proposed for the X platform is raising concerns among users because it alters how account blocking works. Under the new scheme, blocked accounts will be able to view a user's public profile and posts but will not be able to interact with them. Musk has criticized the current blocking and moderation system as ineffective, arguing that problematic users routinely circumvent blocks through alternate accounts or private browsing.
Despite his stance, blocking remains a foundational protection against online harassment. It is not a foolproof solution and cannot guarantee a user's safety, but it serves as a first step in keeping abusers at a distance, and combined with other online security measures it can make the digital environment considerably safer. Blocking is also effectively a requirement for social media apps distributed through major app stores, whose guidelines call for tools to block abusive users.
A month after Musk announced the modification, users received a formal notification on the platform, prompting outrage and criticism. Many argued that the decision undermines the true purpose of blocking and could facilitate harassment and surveillance, noting that blocking is often the first resort in cases of doxxing or online verbal abuse.
The change also poses a heightened risk for communities already vulnerable to harassment. GLAAD's 2024 Social Media Safety Index identified X as the most unsafe platform for LGBTQ+ users, echoing its 2023 findings. Organizations like the Trevor Project stress that at-risk users should actively use the blocking function.
Teenagers, who already face significant risk of online abuse, may be harmed further. A report by Thorn indicates that young people rely on digital safety tools such as blocking to defend themselves in online spaces, and that they often prefer these tools to seeking help from people in their offline lives. Weakening blocking therefore leaves them dependent on a patchwork of platform policies and may raise additional barriers to talking about their experiences.
The X platform is in a difficult position, having failed to adequately address concerns about content moderation. As child sexual abuse material generated by artificial intelligence proliferates, easy access to minors' accounts, and to the accounts that publish such content, becomes even more troubling. While all social media platforms face pressure to improve online safety and mental well-being, others have rolled out more comprehensive safety tools, such as Instagram's "Limits" and "Teen Accounts" features, which add barriers between young users and strangers in the digital environment.