Friday, November 8, 2024

After TikTok, EU to scrutinise US-based social media apps for data privacy, safeguards against AI


Social media platforms are set to face increased scrutiny, especially over how they handle privacy and generative AI. Image: AFP

Now that the US House of Representatives has passed a bill that could ban TikTok or force its owner, ByteDance, to sell its stake in the social media platform, other social media platforms will also come under EU scrutiny, mainly over how they handle user data. While TikTok is currently the primary focus of scrutiny over its alleged ties to the Chinese government, it is not alone in facing international censure.

Over the past year, tech giants including Amazon, Meta (formerly Facebook), Apple, and others have found themselves embroiled in legal battles over issues of content moderation and data privacy on both sides of the Atlantic.

This collective scrutiny marks the emergence of a new regulatory consensus in Western nations, indicating a shift towards more stringent oversight of the tech industry.

The European Union (EU) has also taken action against TikTok, launching an investigation in February into the platform’s alleged failure to protect minors. This investigation follows a substantial fine of $372 million imposed on TikTok by the EU just six months earlier for similar violations. Under the EU’s new Digital Services Act, TikTok could face penalties of up to $800 million or 6% of its global turnover.

With major elections scheduled in Europe and the United States later this year, consumers should prepare for significant changes in their online experiences, prompting questions about the future of social media platforms.

The renewed focus on tech companies places increased pressure on TikTok and its parent company, ByteDance. Even if the proposed TikTok ban clears the US Senate, ByteDance would have a five-month window to sell TikTok’s US operations before facing more severe measures.

However, such decisions are likely to face legal challenges, as seen with previous attempts by states like Montana to enact bans on TikTok, resulting in disputes over First Amendment rights. The evolving legislative landscape underscores a shift in priorities from protecting freedom of speech to prioritizing user protection, reflecting an international trend towards greater regulatory oversight of online platforms.

The EU’s Digital Services Act represents the latest effort in a series of global regulations aimed at addressing online safety concerns.

These regulations, which place greater responsibility on platforms, signify a departure from earlier internet legislation. In the 1990s, legislation governing online service providers, such as Section 230 in the United States, focused on extending First Amendment protections.

However, recent events, including public outrage over cases like the Molly Russell inquest and US Senate hearings on online child exploitation, have prompted regulators to emphasize online safety and transparency. The shift reflects growing concerns that platforms prioritizing user acquisition over safety can result in significant harm to users.

The adoption of regulatory measures extends beyond the EU, with countries like the United States, Australia, Singapore, South Korea, and several Latin American nations rolling out their own legislation in recent years. This global consensus marks a significant departure from previous approaches and signifies a unified effort to establish stronger regulatory frameworks to protect internet users.

Despite their immense value to local economies, tech giants like Google, Amazon, Meta, Apple, and Microsoft are facing increased regulatory scrutiny. Recent fines imposed on Apple under EU antitrust laws and the ongoing enforcement of GDPR demonstrate the growing willingness of regulators to hold tech companies accountable.

Meanwhile, the European Commission is already tightening the screws on big tech companies, including Google, Facebook and TikTok, with requests for information on how they are dealing with risks from generative artificial intelligence, such as the viral spread of deepfakes. To that end, it has sent questionnaires asking how eight platforms and search engines, including Microsoft’s Bing, Instagram, Snapchat, YouTube and X (formerly Twitter), are curbing the risks of generative AI.

European users, constituting a significant portion of social media platforms’ user bases and advertising revenue, are particularly influential in shaping regulatory responses.

The evolving relationship between regulators and tech companies is characterized by complexity and interdependence. While calls for regulatory constraints on Big Tech are growing, unilateral bans remain uncertain. Both parties recognize the importance of collaboration, as highlighted by statements from figures like Elon Musk and Mark Zuckerberg.

For users, changes in online features and services are imminent, with potential shifts towards subscription models aimed at offsetting compliance costs. Such agreements may benefit consumers by promoting digital literacy and safeguarding personal data from exploitation by ad-dependent companies.

As technology and democracy intersect in a pivotal year, the ongoing debate between regulators and online platforms is expected to intensify. Greater legislative clarity and a safer online environment are the desired outcomes of this ongoing struggle, promising a future where user protection and online safety are paramount.

(With inputs from agencies)
