The era of unregulated social media has ended

2021-10-12

In recent years, experts' and academics' views on how social media platforms and big tech companies should and should not operate have changed considerably. Various countries, including Germany, Turkey, and Russia, have already adopted social media regulations, and more are on the way. Political Capital, in a joint project with the Heinrich Böll Foundation, has launched a substantive debate on the regulation of social media, covering its legal and ethical implications, the spread of disinformation, radicalization, and its impact on democracy. Within the framework of the project, we aim to explore the current debate on social media and big tech regulation so that citizens can make informed decisions when using online services. As the next part of this debate, Political Capital interviewed Mark MacCarthy, a Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at Brookings and an adjunct professor at Georgetown University.

 

In a previous article, you criticized the Facebook Oversight Board. How should such an organization function, in your opinion? Do social media platforms need to be regulated by the authorities, or can self-regulation be sufficient?

The key is creating a new regulatory agency with authority over Facebook and other social media companies. The agency should be responsible for promoting competition, protecting privacy, and overseeing content moderation on social media platforms. The United Kingdom is moving in this direction with its Online Safety Bill, which would assign responsibility for supervising social media platforms to the existing media regulator, Ofcom. In the United States, legislation sponsored by Senators Brian Schatz and John Thune would assign responsibility for overseeing social media content moderation to the Federal Trade Commission, which already has responsibilities for promoting competition and consumer protection.

In my view, the social media regulator should enforce transparency measures that provide users with more information about the standards used in content moderation, inform them when they are in violation of one of those standards, and provide an opportunity to appeal a content moderation decision. The transparency rules should also give users and the public an opportunity to complain about material that violates the platform's content rules and to appeal a decision not to take action against the allegedly violative material. Appeals should go first to an internal review board and then to an external review board maintained and operated by an industry-led self-regulatory organization modeled after the Financial Industry Regulatory Authority, which oversees the broker-dealer industry under the supervision of the U.S. Securities and Exchange Commission.

The social media regulator should not be allowed to second-guess the content moderation decisions of social media companies, and it should have no role in determining the content standards the companies use. The social media self-regulatory organization would be authorized, however, to play both roles. This is needed to prevent a government regulator from imposing a partisan political perspective on the content moderation decisions of social media companies. In addition, the social media regulator should have access to the data and algorithms social media companies use to personalize content and should be able to make such data available to vetted researchers, subject to the confidentiality arrangements needed to protect user privacy and proprietary information. Separately, the U.S. should modify its Section 230 rule to put in place a notice liability regime modeled on the one developed in Europe and currently in place in the U.S. under the Digital Millennium Copyright Act.

 

The debate on the costs and benefits of regulation has been going on for some time, and the positions are not really converging. How do you see the debate shifting in the next year or so, and to what effect?

I think there is considerable convergence on the need for more transparency in the U.S. and in other jurisdictions, including the United Kingdom, the European Union, Australia, Canada, and Ireland. In the U.S., this convergence is bipartisan, supported by both Democrats and Republicans. This consensus arises from the enormous amount of misinformation, disinformation, and hate speech on social media platforms, and the sense that the social media companies will not do enough on their own to improve things. I think the chances are quite good for legislation such as the Schatz-Thune bill mentioned above to move through Congress in the next year.

 

How can users benefit from the evolving regulatory environment? What can they lose because of regulation?

Transparency would benefit users by giving them more ability to confront social media companies over the quality of their content moderation decisions. Greater transparency would also put public pressure on the companies to do a better job at content moderation by putting more resources into the work, refining their standards, and improving the quality of their enforcement actions. There is very little downside to transparency. One danger is that it might not be enough to make a real difference. But it would allow regulators and policymakers to gather more information that could inform what other steps, such as regulation of algorithmic amplification, might be needed to control harmful material online.

 

Why have states only now started to think about regulating digital platforms when they – Google, Facebook, etc. – have been with us for the last 15-20 years?

For the last 40 years or so in the U.S., there has been a widespread view that government regulation does more harm than good. Liberals and conservatives both thought that an unregulated internet would be a very good thing and believed only authoritarian governments would attempt to regulate it.  This experiment in deregulation had a chance to prove itself by being the default mode since the rise of the internet in the late 1990s.

The results speak for themselves:

  • concentration in all the core lines of business on the internet, such as search, social media, e-commerce, and mobile applications;
  • the accumulation of vast databases of personal information that threaten user privacy;
  • and the astonishingly widespread misinformation, disinformation, and hate speech that threatens the ability of countries to govern themselves and take the steps needed to protect people from threats to public safety and health, including the public health measures needed in the fight against COVID-19.

The widespread and growing concern over these competition, privacy and speech issues caused policymakers of both political parties to reassess their former belief that regulation was harmful and to begin the process of throwing a regulatory net around the digital companies that dominate their lines of business.

 

How do you assess the Australian and French regulatory practices regarding online platforms?

Australia has taken a number of positive steps. Its mandated negotiation between publishers and Google and Facebook produced agreements to share revenue, but more needs to be done to fund public interest news beyond the protection of traditional publishers. Its investigations into the ad tech space are first rate, and its call for special rules to govern ad tech companies, including Google, is exactly right. I worry that it is giving too much weight to the need to promote ad tech competitors in a way that might compromise the privacy interests of Australian users and therefore violate existing Australian privacy laws. Its recent Online Safety Act revises its system of removal notices and extends it to app distributors and search engines, but its new basis for removal, abuse of adults, is too vague and might sweep in some acceptable material.

France is also doing well. Its high court struck down an attempt to mirror the German NetzDG law, which gave social media companies a short deadline to remove illegal material upon notification. France is now considering a scaled-back transparency measure and might swing behind the EU's Digital Services Act as a replacement for its own failed law.

 

What changes do you expect to see regarding social media regulation in the near future in Europe? Can the planned Digital Services Act be successful?

The Digital Services Act is the key measure and is likely to move forward in a modified form in the next year or so. Its transparency measures are excellent and will make a substantial improvement in the practices of social media companies. Its improved system of notice liability for illegal material is exactly right, since it requires companies to act expeditiously upon receiving a properly constituted notice of illegality or lose their immunity from liability for the illegal material.

It provides government regulators with a bit too much authority to nudge social media companies to act against harmful content that does not violate the law. This comes close, in my view, to replacing the judgment of social media companies with the judgment of regulators and, in effect, erasing the distinction between illegal material and harmful but legal material. I suspect it will be scaled back in the final version. The UK Online Safety Bill is also very good on transparency, but it too flirts with giving its regulator, Ofcom, too much authority to police harmful but legal content.

 

What can we expect from the US-EU Trade and Technology Council’s (TTC) first meeting at the end of September? Can the West work together to form a “technology alliance” or is the gap left by the Trump presidency too much to handle?

The first meeting took place in September, and the readout is positive in the areas of Artificial Intelligence and supply chain regulation. We didn't see much movement in the larger area of convergence on platform regulation. I expect that issue to be taken up at the next meeting in 2022. I think the issue is less whether the U.S. and Europe can form a technology alliance against the rest of the world and more whether they can find a sensible shared vision on how to regulate the tech companies that dominate the markets in both Europe and the U.S.

 

Is Europe still the world’s trendsetter in tech regulation after China’s crackdown in its tech sector?

Europe leveraged access to its own market to move the rest of the world toward its perspective on data protection rules through its General Data Protection Regulation and its predecessor, the 1995 Data Protection Directive. To have easy access to data concerning European citizens, a country had to show that its data protection system was essentially equivalent to the European system. The “Brussels effect” is also present in Europe's attempt to regulate Artificial Intelligence, since companies that conduct the required pre-market and ongoing risk assessments for high-risk applications will do so for applications used throughout the world, not simply in Europe.

As for China, its attempt to regulate its tech sector has so far been confined to domestic companies. Its system has always allowed government agencies to play a larger role in content decisions for online companies than is typical in the U.S. and its allies, but outside of the content area it has largely left the sector unregulated. That is changing rapidly as it takes antitrust and data protection measures against its dominant tech companies. But it is not inventing totally new standards. Its recently adopted data protection law closely mirrors Europe's GDPR, its proposed regulations of algorithms reflect those proposed in the U.S. and Europe, and the antitrust charges against its tech companies are similar to many of the complaints made in Europe and the U.S. against Google, Facebook, Amazon, and Apple.

 

What differences do you see between democratic and authoritarian regulatory practices? How can backsliding democracies exploit it?

The key difference between countries like China, Russia, and Saudi Arabia on the one hand and the U.S. and its allies on the other is a greater willingness to use government agencies to set and enforce content standards for online companies. It seems to me that on that issue the two groups of countries are coming closer together, with Europe especially demonstrating more willingness to regulate online content. The terrorist content regulation the European Union adopted this year is a good example of this, as is the NetzDG law in Germany, and the parts of the Digital Services Act that allow regulators to push platforms to address harmful but legal content.

 

Who should be setting the global tech rules: the US, the EU or China?

The U.S., the EU, and China are all wrestling with the same problem: how to get a better handle on the activities of digital companies that dominate their lines of business and are using that dominance to harm competition, privacy, and the quality of content moderation. China has a more effective system of information control, one that the U.S. will not be inclined to imitate directly, while Europe might to some degree. On competition and privacy, the U.S., the EU, and China have a lot to learn from each other. China has imitated the GDPR, as have many other countries seeking to upgrade their privacy regimes. The U.S. is looking to do the same by passing comprehensive national privacy legislation. Both the U.S. and the EU are looking to modernize their competition laws and might learn something from how the attempt to inject competition into China's market works out over the next year or so.

 

Bio

Mark MacCarthy is a Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at Brookings. He is also an adjunct professor at Georgetown University in the Graduate School's Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses on the governance of emerging technology, AI ethics, privacy, competition policy for tech, and the ethics of speech. He is also a Nonresident Senior Fellow at the Institute for Technology Law and Policy at Georgetown Law.