Less than a week before the election, the CEOs of Twitter, Facebook and Google are set to face a grilling tomorrow by Republican senators who accuse the tech giants of anti-conservative bias.
Twitter CEO Jack Dorsey, Facebook’s Mark Zuckerberg and Google’s Sundar Pichai agreed to testify remotely during a hearing of the Senate Commerce Committee after being threatened with subpoenas.
Beyond questioning the CEOs, senators will examine proposals to revise Section 230 – the long-held legal protection for online speech – an immunity that critics in both parties say allows the companies to abdicate their responsibility to moderate content fairly. Those critics argue the companies’ biases have become increasingly transparent as the Hunter Biden laptop stories are reported (and censored).
Facebook’s Mark Zuckerberg takes a more measured tone in his testimony (excerpt, emphasis ours):
…the debate about Section 230 shows that people of all political persuasions are unhappy with the status quo. People want to know that companies are taking responsibility for combatting harmful content—especially illegal activity—on their platforms. They want to know that when platforms remove content, they are doing so fairly and transparently. And they want to make sure that platforms are held accountable.
Section 230 made it possible for every major internet service to be built and ensured important values like free expression and openness were part of how platforms operate. Changing it is a significant decision. However, I believe Congress should update the law to make sure it’s working as intended. We support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today.
At Facebook, we don’t think tech companies should be making so many decisions about these important issues alone. I believe we need a more active role for governments and regulators, which is why in March last year I called for regulation on harmful content, privacy, elections, and data portability. We stand ready to work with Congress on what regulation could look like in these areas. By updating the rules for the internet, we can preserve what’s best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms. I would encourage this Committee and other stakeholders to make sure that any changes do not have unintended consequences that stifle expression or impede innovation.
Twitter’s Jack Dorsey is more aggressive, warning ironically that “Section 230 is the Internet’s most important law for free speech and safety. Weakening Section 230 protections will remove critical speech from the Internet.” – (excerpt, emphasis ours)
Twitter’s purpose is to serve the public conversation. People from around the world come together on Twitter in an open and free exchange of ideas. We want to make sure conversations on Twitter are healthy and that people feel safe to express their points of view. We do our work recognizing that free speech and safety are interconnected and can sometimes be at odds. We must ensure that all voices can be heard, and we continue to make improvements to our service so that everyone feels safe participating in the public conversation—whether they are speaking or simply listening. The protections offered by Section 230 help us achieve this important objective.
As we consider developing new legislative frameworks, or committing to self-regulation models for content moderation, we should remember that Section 230 has enabled new companies—small ones seeded with an idea—to build and compete with established companies globally. Eroding the foundation of Section 230 could collapse how we communicate on the Internet, leaving only a small number of giant and well-funded technology companies.
We should also be mindful that undermining Section 230 will result in far more removal of online speech and impose severe limitations on our collective ability to address harmful content and protect people online. I do not think anyone in this room or the American people want less free speech or more abuse and harassment online. Instead, what I hear from people is that they want to be able to trust the services they are using.
I want to focus on solving the problem of how services like Twitter earn trust. And I also want to discuss how we ensure more choice in the market if we do not. During my testimony, I want to share our approach to earn trust with people who use Twitter. We believe these principles can be applied broadly to our industry and build upon the foundational framework of Section 230 for how to moderate content online. We seek to earn trust in four critical ways: (1) transparency, (2) fair processes, (3) empowering algorithmic choice, and (4) protecting the privacy of the people who use our service. My testimony today will explain our approach to these principles.
As AP reports, Trump signed an executive order this year challenging the liability protections granted under a 1996 telecommunications law. That provision, known as Section 230, has served as the foundation for unfettered speech on the internet.
“For too long, social media platforms have hidden behind Section 230 protections to censor content that deviates from their beliefs,” Sen. Roger Wicker, R-Miss., the committee chairman, said recently.
The head of the Federal Communications Commission, an independent agency, recently announced plans to reexamine the legal protections, potentially putting meat on the bones of Trump’s order by opening the way to new rules. The move by FCC Chairman Ajit Pai, a Trump appointee, marked an about-face from the agency’s previous position.
The unwelcome attention to the three companies piles onto the anxieties in the tech industry, which also faces scrutiny from the Justice Department, federal regulators, Congress and state attorneys general around the country.