The US Federal Trade Commission said in a study released on Thursday that social media companies collect, share, and process vast amounts of information about their users with little transparency or user control, including over how that information is used by artificial intelligence systems.
The report examines how Meta Platforms, ByteDance’s TikTok, Amazon’s game-streaming platform Twitch, and other companies handle user data, concluding that many of the companies’ data management and retention policies were “woefully inadequate.”
The FTC study also covered YouTube, social networking site X, Snap, Discord, and Reddit, though the findings were anonymized and did not attribute specific practices to individual firms. Alphabet’s Google owns YouTube.
Discord, a communications platform, said the report lumps together very different business models and that it did not offer advertising at the time the study was conducted.
According to an X spokesman, the assessment is based on practices from 2020, when the platform was known as Twitter, which X has since changed.
“X takes user data privacy seriously and ensures users are aware of the data they are sharing with the platform and how it is being used, while providing them with the option of limiting the data that is collected from their accounts,” a spokesman said.
According to the spokesman, only about 1% of current X users in the United States are between the ages of 13 and 17.
Other businesses did not immediately respond to requests for comment.
According to the FTC, social media companies collect data through tracking technologies used in online advertising, as well as by purchasing information from data brokers.
“While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking,” said FTC Chair Lina Khan.
Data privacy, particularly for children and teens, has been a contentious topic. The US House of Representatives is reviewing bills passed by the Senate in July to address the effects of social media on young people. Meta recently introduced teen accounts with enhanced parental controls.
Meanwhile, Big Tech companies have been scrambling for data sources to train their burgeoning artificial intelligence technologies.
The data arrangements are rarely publicized, and they often involve private content hidden behind paywalls and login screens, with little or no notice to the users who created it.
In addition to gathering information about how users interact with their services, most of the companies examined by the FTC collected or inferred users’ ages and genders from other data.
According to the FTC, some collected data on users’ income, education, and family status.
Companies collected data on people who never used their services, and some were unable to identify all of the ways they collected and used data, according to the FTC.
On Thursday, advertising industry groups criticized the report, saying that consumers understand the value of ad-supported services.
“We are disappointed with the FTC’s continued characterization of the digital advertising industry as engaged in ‘mass commercial surveillance,’” said David Cohen, CEO of the Interactive Advertising Bureau, an advertising and marketing trade group whose members include Snapchat, TikTok, and Amazon.
In today’s fast-changing digital landscape, social media platforms have become central to how people connect, share information, and express their identities.
However, as these platforms become more complex, particularly with the introduction of artificial intelligence (AI), concerns about user privacy and data control have grown.
According to the Federal Trade Commission (FTC), social media users have less and less control over how their data is collected, processed, and used by AI systems.
Artificial intelligence has transformed how social media platforms operate. From tailored content recommendations to targeted advertising, AI systems analyze vast quantities of user data to improve user experience and engagement.
These algorithms construct detailed profiles from user activity such as likes, shares, comments, and browsing history.
Those profiles are then used to anticipate user preferences and personalize content. While this can improve the user experience, it raises serious privacy concerns.
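To make the mechanism concrete, here is a minimal, hypothetical sketch of how engagement signals might be aggregated into an interest profile and then used to rank content. The event fields, topic labels, and weights are invented for illustration and do not describe any platform’s actual system.

```python
# Hypothetical sketch: engagement-based profiling and content ranking.
# All field names and weights are illustrative assumptions, not a real platform's logic.
from collections import Counter

ENGAGEMENT_WEIGHTS = {"like": 1.0, "share": 2.0, "comment": 1.5, "view": 0.2}

def build_profile(events):
    """Aggregate a user's interactions into per-topic interest scores."""
    profile = Counter()
    for event in events:  # each event: {"topic": ..., "action": ...}
        profile[event["topic"]] += ENGAGEMENT_WEIGHTS.get(event["action"], 0.0)
    return profile

def rank_content(profile, candidates):
    """Order candidate posts by how strongly their topic matches the inferred profile."""
    return sorted(candidates, key=lambda post: profile.get(post["topic"], 0.0), reverse=True)

# Example: a handful of interactions is enough to reorder the feed.
events = [
    {"topic": "fitness", "action": "like"},
    {"topic": "fitness", "action": "share"},
    {"topic": "politics", "action": "view"},
]
feed = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "fitness"}]
print(rank_content(build_profile(events), feed))  # fitness post ranked first
```

Even this toy version shows why the practice concerns regulators: a small stream of routine interactions yields a profile that quietly steers what each user sees.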
The FTC has raised concerns about the opaque nature of data collection on social media platforms. Many people are unaware of the extent to which their information is gathered and used.
Even when users provide consent, it is often obtained through lengthy and convoluted terms-of-service agreements that are rarely fully understood.
As a result, users unknowingly grant social media companies broad access to their personal information, with little awareness of how it will be used.
The FTC emphasizes that this lack of transparency undermines users’ ability to make informed decisions about their data. While social media platforms may claim that users have consented to data collection, that consent is often not fully informed.
This problem is exacerbated by the fact that AI-driven data collection and processing are constantly evolving, making it difficult for users to understand how their data is being used.
The use of AI to process user data on social media has far-reaching consequences. One of the most fundamental concerns is that AI systems may reinforce biases and spread disinformation.
Because AI systems are trained on existing data, they may inadvertently learn and replicate biases contained in that data. This can amplify harmful content, propagate disinformation, and perpetuate stereotypes.
Furthermore, AI’s predictive capabilities can be used to subtly influence user behavior. AI algorithms, for example, can personalize content or advertisements for users based on their data profiles.
This raises ethical concerns about manipulation and the erosion of user autonomy. The FTC has cautioned that such tactics can have major consequences, especially in areas such as political advertising and the dissemination of fake news.
According to the FTC report, individuals have little control over how AI algorithms use their data on social media platforms.
While users can adjust privacy settings and limit the amount of data they disclose, these safeguards are often insufficient to fully protect their privacy. Furthermore, once data is collected, users have little to no say in how it is used, shared, or sold to third parties.
The FTC contends that there is an urgent need for stricter rules to protect user privacy in the age of AI. This includes greater transparency from social media companies about their data collection practices, as well as stricter requirements for obtaining users’ informed consent.
The FTC also advocates for the creation of ethical guidelines for AI use, particularly in areas where there is a risk of harm, such as content moderation and targeted advertising.
Empowering people to take control of their data is critical to addressing the issues raised by AI-powered social media platforms.
This can be accomplished through a variety of approaches, including improved privacy settings, user education, and the development of tools that help users manage their data more effectively.
For example, social media platforms could give users more granular control over what data is collected and how it is used. Platforms could also provide clearer explanations of their data practices and the implications of AI use.
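As a rough illustration of what such granular control could look like, the sketch below models per-category consent settings that collection code must check before gathering anything. The category names, defaults, and helper function are hypothetical and do not reflect any platform’s real settings or API.

```python
# Hypothetical sketch of granular, per-category data-collection consent.
# Categories and defaults are invented for illustration only.
from dataclasses import dataclass

@dataclass
class DataConsent:
    """Per-user record of which data categories may be collected or shared."""
    collect_browsing_history: bool = False   # off unless the user explicitly opts in
    collect_location: bool = False
    use_for_ad_targeting: bool = False
    share_with_third_parties: bool = False

def may_collect(consent: DataConsent, category: str) -> bool:
    """Gate each collection path on the user's explicit choice for that category."""
    return getattr(consent, category, False)

# Example: a user who allows ad targeting but not third-party sharing.
consent = DataConsent(use_for_ad_targeting=True)
print(may_collect(consent, "use_for_ad_targeting"))     # True
print(may_collect(consent, "share_with_third_parties"))  # False
```

The design point is that consent is opt-in and checked per category at the point of collection, rather than granted wholesale through a single terms-of-service click.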
Educating people about the risks and benefits of AI-powered platforms is also essential to allowing them to make informed decisions.
To summarize, the rise of AI in social media has delivered enormous benefits while also posing serious risks to user privacy and data governance.
The FTC’s warning underscores the need for greater transparency, regulation, and user empowerment to ensure that the benefits of AI are realized without jeopardizing user rights.
As AI continues to shape the future of social media, it is critical that users retain control over their personal data and are protected from the risks associated with AI-powered technology.