Big Tech and Media Freedom

Over the years, the indisputable boons of the internet and social media have been welcomed by governments and the public across the world. Of late, however, apprehensions about the growing negative impacts of social media have pushed governments to scrutinise and regulate the platforms. Even though the tech platforms are aware of the issues around content moderation, rampant misinformation, and propaganda – all of which bear directly on freedom of expression and human rights – they have not yet shown themselves willing to weed out the systemic seeds of hate that damage the virtual and social ecosystem.

The Inception & Evolution of Big Tech

Put simply, the internet revolutionised the world as we knew it. The arrival of the Web brought with it promises of democratisation, economic liberalisation, and inclusion. It gave citizens, and the marginalised, a stage and a voice. Empires were built, and broken. Somewhere along the way, almost two decades ago, came a phenomenon called social media, which took the world by storm. Initially used for networking and trade, social media revealed the power of its positive impact during civil rights movements such as the Arab Spring, a massive wave of protests against authoritarian establishments in the Middle East, and more recently, #BlackLivesMatter, a movement against racial and judicial injustice faced by African Americans in the United States.

Very soon, the promise of social media as an inclusive public sphere and a mediator of change began to fade, as the other side of the coin gave rise to disproportionately malicious problems. The platforms turned into breeding grounds for conspiracy theories and regressive ideas such as hatred, bigotry, and misogyny. The monopoly of a handful of technology companies has perpetuated inequality and massive misinformation through their policies and algorithms. The corporate powers have also reportedly given in to manipulation by political forces and vested interests.

The tech companies have displaced mainstream legacy media in disseminating news, blurring the line between platform and publisher. Because they currently fall outside the regulatory scope that applies to media companies, the platforms have an upper hand in influencing politics and the cultural fabric of society. In a recent online discussion, Emily Bell, Director of the Tow Center for Digital Journalism at Columbia University, and Emre Kızılkaya, Vice Chair of the International Press Institute's Turkey National Committee, said that over the last few decades journalism has been fundamentally reshaped by big digital platforms such as Facebook and Google, as traditional media outlets lose advertising revenue to the internet giants, which have now taken on the role of publishers globally.

Also read:

Facebook, Twitter: Platforms or media companies?

Platforms Are Not Publishers

Market Monopoly 

Tech giants such as Google, Facebook, and Twitter, arguably the most powerful entities in the world, are armed with data and have purportedly subverted democratic processes such as elections and demonstrations. Apart from influencing social change, the companies also have a major hold on the digital economy. In 2019, the United States took an important step by opening a congressional investigation into the market power of Google and the other tech giants whose products and services dominate the digital space. “As they exist today, Apple, Amazon, Google, and Facebook each possess significant market power over large swaths of our economy. Our investigation leaves no doubt that there is a clear and compelling need for Congress and the antitrust enforcement agencies to take action that restores competition, improves innovation, and safeguards our democracy,” the House Judiciary Committee’s antitrust subcommittee said in its report. Prohibiting platforms from engaging in self-preferencing and eliminating anticompetitive forced arbitration clauses are among the recommendations made in the report.

On the other hand, Australia and France have moved to undo the damage done to news organisations by Big Tech. Australia has introduced media legislation that requires platforms to pay news publishers for using their content. Facebook threatened to block Australian users from sharing news, and Google argued that the legislation was “unfair”. However, Bell and Kızılkaya said that the payments would in fact be a fraction of what media companies should receive.

Also read:

Why Google and Facebook are being asked to pay for the news they use – Explainer

Did Big Tech Get Too Big? US Crackdown Seeks Answer – Quicktake (Click to view PDF)

The Flipside of Social Media

With the rise of social media’s power, the scale of misinformation and privacy concerns also grew. What started out as rumours and trolling evolved into disinformation campaigns and targeted political propaganda that spread like wildfire across the vast, crowded platforms. The firehose of fake news has even been implicated in the genocide against the Rohingya minority in Myanmar that began in 2016.

An article published in the journal Policy & Internet says, “In relation to privacy, the wide range of issues include expanding surveillance regimes; tracking and profiling of users’ online behavior; and data transfer without the required safeguards.” Companies such as Google and Facebook emphasise in their privacy policies that data are collected in order to provide better services. However, the Cambridge Analytica case exposed the extent to which the platforms stretch the boundaries of data privacy.

In 2018, it was revealed that the personal data of more than 8.7 crore Facebook users had been harvested by the political consulting firm Cambridge Analytica without their knowledge. This data was reportedly used to profile and swing voters in several campaigns, including Donald Trump’s 2016 US presidential election campaign. More recently, India’s Central Bureau of Investigation booked Cambridge Analytica for the “illegal harvesting of personal data from Facebook users in India” for commercial purposes.

The founders of Facebook and Twitter have come under scrutiny for their inaction in controlling rampant misinformation. Criticism of the tech giants has also centred on content moderation and algorithmic bias. In November 2020, Mark Zuckerberg and Jack Dorsey, CEOs of Facebook and Twitter respectively, were ordered to testify before the United States Senate Judiciary Committee over allegations of censorship and “anti-conservative” bias during the 2020 presidential election campaign. Zuckerberg and Dorsey promised lawmakers that they would aggressively guard their platforms from being manipulated by foreign governments or used to incite violence around the election results. Zuckerberg said that technological safeguards, as well as human monitors, now catch most hate speech before it is reported to the company, but acknowledged that “there’s still more progress to be made.”

Also read:

What Does It Mean For Social Media Platforms To “Sell” Our Data?

2019 In Focus: The Year Big Tech Tried To Fight User Privacy Concerns But Failed Anyway

Content Moderation & Censorship – A Grey Area

According to experts, moderating and censoring content is a grey area. On the one hand, restrictions imposed by governments on social media platforms, or the platforms’ own practices, algorithms, and decisions to block certain content, can result in free speech violations. On the other hand, leaving content unregulated can pose problems as grave as the subversion of elections in democracies.

For platforms as vast as Facebook and Twitter, the task of content moderation exceeds human capacity. However, technologies such as artificial intelligence have also fallen short, particularly with audiovisual content. An article published by First Draft, a nonprofit that tracks misinformation, notes that when Facebook leaned on AI instead of human content moderators in the wake of the pandemic, there was a greater spread of child sexual abuse material. While Twitter has begun labelling tweets that spread misinformation and those that are “state-affiliated”, the platform still has a long way to go in weeding out hateful and false information.

Moderation by the platforms themselves, which falls within the realm of “corporate censorship”, is mired in legal conundrums. In the absence of standardised misinformation and censorship laws, the companies’ policies play out differently under different governments. While countries such as China have banned the tech giants outright, some countries allege that the platforms favour the left, and others that they favour the right.

Also read:

Digital Platforms: To Regulate or Not To Regulate? (Click to view PDF)

How Does Content Moderation Affect Human Rights? Commentary on the Case of Infowars

European Union’s Attempt to Rein in Social Media Giants

The European Union (EU) has proposed two draft laws – the Digital Services Act and the Digital Markets Act – to upgrade the legislation governing digital services in the EU. The laws focus on competition among social media platforms and on holding them accountable for the content they host. They also introduce heavy fines for violations, as high as 6% of a platform’s revenue. Margrethe Vestager, Executive Vice President of the European Commission, wrote, “The Digital Services Act will impose new obligations and responsibilities on all online intermediaries, mainly platforms, with regard to the content they host – wherever they are in the EU.” The Digital Markets Act, she wrote, will “more specifically target the economic behaviour of companies that have become systemically relevant.”

A key part of the legislation is said to address the dominance of big players such as the US-based Google and Facebook. European Commission President Ursula von der Leyen said during a virtual meeting of the World Economic Forum, “We want the platforms to be transparent about how their algorithms work because we cannot accept that decisions that have a far-reaching impact on our democracy are taken by computer programs alone.”

Earlier, in 2018, the EU had introduced the game-changing General Data Protection Regulation (GDPR), a law to protect personal data acquired and handled by companies. The GDPR shifted the onus of taking protective measures from the user to the organisation and allowed citizens to request access to, or erasure of, their data at any time.

Also read:

Tech Giants Risk Breakup Under Strict EU Digital Rules

European Union: Regulating The Internet, At Last? The Digital Markets Act And The Digital Services Act

Previous Dossier:

Pandemic and the Media

(Researched and written by Geetha Srimathi Sreenivasan)

Podcast | Episode 2: Regulation or control?

In November 2020, India brought in new regulations for digital news media, social media, and OTT platforms. Under the new regulations, digital platforms come within the purview of the Ministry of Information & Broadcasting (MIB). The shift to the MIB from the Ministry of Electronics and Information Technology has raised censorship concerns among the media fraternity. In this episode, media practitioners Sashi Kumar, Founder & Editor-in-Chief, Asiaville News; Dhanya Rajendran, Editor-in-Chief, The News Minute; and Nikhil Pahwa, Founder, Medianama, share their thoughts on digital news media regulation.

Click here to listen

The Ministry is also introducing changes to the Cable TV Networks (Regulation) Act, one of which would bar state-owned entities from functioning as network operators.

Read more:

https://www.exchange4media.com/media-others-news/2020-mibs-regulatory-interventions-their-impact-on-media-110102.html

(Compiled by Geetha Srimathi Sreenivasan)