Big Tech Should Answer to the Public, Not to Speech Regulators

I share Nicholas Carr’s concern that social media will remain a profoundly detrimental influence on American life until we transform the existing legal landscape. But I part ways with him on the specific prescription. Carr sees promise in the Federal Communications Commission regulating social media companies the way it has regulated radio and TV broadcasts. The problem, however, is not an absence of FCC control. It is the hyper-concentrated control of social media in just a few hands.
Carr makes an important conceptual contribution to the debate. He notes that communications companies were historically either conversation-based (telegraphs, phones, and the like) or broadcast-based (network news and newspapers) but that social media companies today blend these roles to an unprecedented degree. He argues that the law should distinguish between these two forms of communication and regulate accordingly. First, for private, conversational speech, platforms should behave as common carriers: simply distribute exchanges without censorship while maintaining strict privacy, which means not collecting data to monetize those exchanges. Second, the FCC or a similar agency should directly regulate public-facing broadcasting in accordance with the “spirit of the common good” or “public interest” — just as radio airwaves have long been regulated. I wholeheartedly agree with Carr’s first point. That’s why I’ve introduced legislation to end the data-hungry, anti-privacy practices of the dominant social media companies.
But I cannot get on board with Carr’s second point. Giving executive officials authority to regulate broadcast media under a vague “public interest” standard has traditionally been justified legally by appealing to inherent natural limitations, such as the scarcity of broadcast spectrum. But as Carr seems to acknowledge, it is not clear that similar constraints legally justify transposing that standard into the digital arena.
Even assuming we could transpose that standard, I would not. To call for regulating broadcaster-users in the “public interest,” as Carr does, is to assume there’s an agreed-upon set of institutions and experts that could identify the public interest and arbitrate the inevitable controversies that would arise. There isn’t. We have, for example, prominent voices in this country who increasingly argue that the “public interest” requires labeling dissent from approved narratives as “misinformation” that must be suppressed. And just this past year, many organizations and former FCC commissioners argued before the Supreme Court that the “public interest” demands race-based regulations. There is some truth underlying Carr’s proposal. Much blame for today’s situation can be laid at the feet of Congress, which for too long took a laissez-faire approach to social media. But the solution is not to police individual “broadcasters” on social media platforms.
Rather, the solution must reckon with the fact that social media is administered by a handful of dominant, hyper-concentrated players, with more-or-less overt ideological commitments of their own. The place for public oversight is to ensure that the few people who control these dominant firms allow a wide range of voices to be heard. Carr presents dynamism as a feature — the selling point — of the public-interest standard. But real competition in the social media marketplace — which we have not had in a long time — can be just as dynamic.
To start, large social media companies should be required to become interoperable with one another: Just as you can email someone who uses a different email provider than your own, you should be able to contact and engage with individuals across different social media platforms. In the same vein, large social media companies should be required to permit the use of alternate filtering and sorting algorithms — democratizing content moderation by allowing users to choose which content they wish to view or block, rather than relying on the black-box internal processes of an individual, hyper-concentrated company.
Further, Section 230 — which has been wrongly interpreted to give social media companies a nearly impregnable shield against accountability — should be amended to allow dominant companies to be held liable for civil damages when they enforce their own terms of service unfairly or when their products, including their algorithms, prove harmful. The prospect of fair accountability is a powerful spur to internal reform, and it impedes companies’ attempts to entrench themselves as monopolies.
Lastly, federal antitrust laws must be reformed to prevent the biggest companies from strangling the market to avoid possible competition. Offering consumers meaningful alternatives — that don’t end up getting absorbed into established market players — will prove an enduring check on the power of any particular firm.
Carr’s and my proposals would both lead to a world in which social media companies are held far more responsible for their actions in public life than they are at present. But I favor giving the American people the legal tools they need to ensure that social media serves the “public interest,” rather than trusting government regulators or individual firms to deliver on that goal through vague legal standards.