Big Tech’s Dark Side: Killer Apps, Abs, and Acqs
By Sherry Fowler, professor of practice in information technology and business analytics
Is bigger always better? Perhaps not when evaluating industries. Consider the tech industry. Though most people agree that technology has been (and can continue to be) used as a force for good, the largest players in this industry (known as Big Tech) have been under increased scrutiny in recent years due to their size, lack of regulation, and impact on society at large.
Killer Apps
Three of the Big Tech companies (Facebook, Twitter, and Google) have deployed social media apps (e.g., Facebook, Instagram, Twitter, YouTube) whose use has exploded in the last fifteen years. People use these killer (practically indispensable) apps to stay connected to friends and family and to increase social capital. Researchers have found that social capital allows individuals to more easily affect their environment (e.g., the more followers, comments, and interactions based on trust, the more influence) and create potential for advantage from their social network. Individuals also use social media for information control, reciprocity, social affiliation, boldness, investigation, news, or loneliness mitigation.
Enterprise-level users employ social media to promote content, collaborate and share knowledge, increase sales and business visibility, run targeted advertising to build brand awareness and demand, and boost user engagement. Social media also serves as a communications tool for exerting collective or social influence, enabling novel forms of collective engagement: crowdsourcing and crowdfunding, political activism, proactive grassroots organizing to drive and sustain social movements, and collective crisis communication for societal change. Yet, for all of the good uses of social media killer apps, there are also negative influences, known as the dark side of social media. For example, research has shown that envy of others, driven by social media use, has a negative impact on an individual’s cognitive and affective well-being. Other negative influences include stress and social overload, depression contagion, and antisocial behavior.
Killer Abuses (Abs)
Abuses from social media companies and the users of their apps abound. These abuses include addiction, privacy neglect, the display of inappropriate content to minors, disinformation, censoring and free speech violations, sex trafficking, cyberbullying, and threats of violent crime.
Addiction. Addiction is a common abuse of social media platforms, partly due to algorithms designed by the social media companies to promote usage at the expense of the users, all while pocketing the profits from ads. Movies like The Social Dilemma have brought this issue to the forefront. James Steyer, the editor of the book Which Side of History?: How Technology Is Reshaping Democracy and Our Lives, posits the need to address the “dark patterns” of algorithm usage that drive the addiction models by social media companies.
Lack of Social Privacy. The lack of social privacy has long been a negative issue for tech companies due to the collecting, selling, exploiting, and misusing of consumers’ individual data, as well as the possible violation of civil rights laws. This has raised antitrust and monopoly concerns about big tech companies. White House Deputy Chief of Staff Bruce Reed, who helped write strict privacy legislation for California, states that the privacy issues of Big Tech must be a priority in the U.S., as the industry itself will not self-regulate. He has also argued that Big Tech should be held liable for harmful content displayed and exploited via their apps.
Display of Inappropriate Content to Minors. Social media companies have long been criticized for displaying inappropriate content to children. As authors Bruce Reed and James Steyer put it: “Although the U.S. has protected kids by establishing strict rules and standards on everything from dirty air and unsafe foods to dangerous toys and violence on television, the Internet has almost no rules at all… kids are exposed to all manner of unhealthy content online.” Social media companies must be held accountable when their apps allow children to click a couple of times and have inappropriate content revealed to them.
Disinformation. Another negative aspect of social media is its role as a potential vehicle for disseminating disinformation, which may take various forms such as fake news, social bots, and social engineering. The source of social media content is vital to information quality and trust. Yet questions arise about how content is deemed disinformation. Should the fact-checkers of social media companies be the arbiters of truth? For example, in October 2020, Twitter did not allow users to link to or post pictures from a New York Post lead story about the possible criminal activities of the son of then-presidential candidate Joe Biden and whether the candidate himself knew about them. Twitter also required the New York Post to delete its own tweet about the story, claiming that the information and its source were potentially illegitimate, yet the company offered no credible basis for that claim. Facebook likewise limited the story’s exposure, saying its fact-checkers were reviewing its veracity. These actions shielded important information from voters ahead of the 2020 election. In July 2021, it was revealed that the federal prosecutor heading the investigation into Biden’s son had delayed issuing search warrants and grand jury subpoenas until after the election.
If there is a lack of accountability from technology companies, there will ultimately be a lack of trust in them. In a 2021 Gallup poll asking respondents about their confidence in various institutions, only 29% said they trusted technology companies (down three points from the prior year), only 18% expressed confidence in big business, and a mere 12% said they trusted Congress.
Even more chilling, consider how trust might be affected when Big Tech and government collaborate. On July 15, 2021, the U.S. Government announced that the Biden administration had been flagging problematic content related to the coronavirus pandemic for Facebook to censor (delete). But what is the definition of problematic content? Has it been quantified, or is it merely subjective? Though the government stated that it was getting advice from expert scientists and physicians, what if these experts disagree? What if their opinions become politicized? As of April 19, 2021, Facebook had already removed more than 12 million pieces of content deemed disinformation. In this case, is Facebook behaving more like a governmental actor than a private company? Is the government colluding with a private company it is supposed to be regulating?
Censoring and Free Speech Abuses. A related dark side of social media is the targeted censoring of both content and individuals in an attempt to mold narratives and control sites. For example, Facebook suppressed articles and censored several individuals whose views differed from the company’s position on Covid-19 vaccine hesitancy and on the Wuhan coronavirus lab-leak theory. Though Facebook later announced that it would no longer remove posts claiming that Covid-19 was manufactured, no one could sue the company for the earlier censorship.
During the first week of July 2021, Facebook grew more concerned about extreme content and tested messages sent to some users asking whether they needed counseling related to potential harm from exposure to extremism. Facebook stated that it would connect users to counseling organizations, which (based on their websites) seemed to consider extremism solely a far-right problem. Facebook may have succumbed to pressure in making this dangerous move, in which friends snitch on friends, especially given that the site never defines the phrase ‘extreme content.’ All of this calls into question whether Facebook is acting on principle or merely using this new test as an opportunity to virtue signal or for some other disingenuous reason.
When private companies engage in censoring actions at the behest of government (as mentioned earlier), they have infringed on the First Amendment, which limits governmental actors at the federal, state, and local levels. American journalist and author Glenn Greenwald states it like this: “while the First Amendment does not apply to voluntary choices made by a private company about what speech to allow or prohibit, it does bar the U.S. Government from coercing or threatening such companies to censor.”
Not only do social media companies censor posts, but they can also censor individuals by removing their accounts. Google, Facebook, and Twitter have banned the former president of the U.S. (Donald Trump) from their platforms. Twitter permanently banned Trump from tweeting, citing the possibility that his words could incite violence, but allowed foreign leaders with far harsher content to stay, including Iran’s Ayatollah Ali Khamenei, whose posts deny the Holocaust and call for the destruction of Israel. In another example, Facebook softened and downplayed the narrative around China’s ruthless treatment of Uyghur Muslims. Cuba’s President of the Council of State, Miguel Díaz-Canel, remains on Twitter even though he called for combat against his own citizens during the uprisings in July 2021, when Cubans protested the lack of food, medicine, and other necessities and cried out for freedom in the streets after more than 60 years of communist rule. Most recently, Twitter has yet to outline a policy on whether Taliban leaders (as the new official heads of state in Afghanistan) may have access to its platform. The double standard of Big Tech censoring posts of some world leaders (but not all who violate standards) concerns other world leaders. Twitter has also banned thousands of accounts of regular citizens whose content the company deems problematic because it could incite violence or is considered disinformation.
YouTube also censors. The American Conservative Union (ACU) stated that YouTube effectively canceled its ability to live stream content from the organization’s “America UnCanceled” Conservative Political Action Conference on July 9-11, 2021. The reason? Because the ACU had posted a video showing the announcement of Donald Trump’s class action lawsuit against Big Tech. The ACU account was frozen for seven days after it aired the Trump lawsuit announcement, according to The Washington Times. ACU Chairman Matt Schlapp believes that this is the latest attempt of “Big Tech censoring content with which they disagree in order to promote the political positions they favor.”
Section 230 of the 1996 U.S. Communications Decency Act states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Dr. David Baumer, attorney and professor emeritus at North Carolina State University, notes that the law is currently under review but still stands. “It insulates Internet Service Providers (ISPs) from liability from getting sued unless they contribute to the content in a factually incorrect situation,” he says, and “allows for expression of people who write on various topics.” Baumer is generally wary of cluttering the Internet with additional regulation and therefore leans toward keeping Section 230. Yet Baumer also suggests that ISPs and tech companies should face liability if they cancel people based on their group affiliation (e.g., a conservative political group), because in doing so they are acting like publishers. Publishers (e.g., The New York Times or The Atlanta Journal-Constitution) must attempt to fact-check everything they publish so that slanderous or defamatory articles about someone do not lead to lawsuits. Conversely, platforms (e.g., Verizon, AT&T, or Comcast) act more like conduits that allow all paying customers to use their services and will not remove a customer regardless of personal speech or views.
Though originally intended to encourage companies to remove illegal or dangerous user content, the lasting effect of Section 230 is to allow Internet-based tech companies and ISPs to display and promote user-generated content without being held responsible for whatever their users might create. In effect, it allows social media companies to censor with immunity. Candeub and Epstein have said that social media companies cannot “claim that Section 230 immunity is necessary to protect free speech, while they shape, control, and censor the speech on their platforms.” Zilles stated that if social media companies “are making decisions about censoring certain types of content, banning certain individuals from their platforms, or somehow throttling or blocking certain types of content by using algorithms, then they are no different from traditional publishers (who edit content) like newspapers or magazines.”
So, are large social media companies just intermediaries (platforms/conduits), or are they also publishers? If publishers, they should be held to the same standard as traditional publishers who are not afforded the protection of Section 230. Social media companies should be responsible for content that generates revenue or content that is censored or blocked by a human or algorithm. Many agree that the companies act as publishers instead of mere platforms, and Section 230 (also known as the Safe Harbor Rule of the Internet) has become an unpopular and comprehensive liability shield for these companies.
If the Internet is the digital equivalent of the town square, shouldn’t everyone be allowed to lawfully speak there? Alan Dershowitz, professor emeritus for Harvard Law School, suggests that the U.S. Congress “could and should limit the exemptions only to media platforms that do not censor lawful speech that they deem offensive.” When private social media companies use the First Amendment as a shield and a sword selectively to censor free speech, and because they represent a small number of “powerful” private companies, they essentially shut down the marketplace of ideas. Ironically, the First Amendment was designed to keep the marketplace of ideas open and entertain all views.
Sex Trafficking. Another dark side of Big Tech is related to sex trafficking. Tech companies have a responsibility to remove content that violates federal criminal laws. Nevertheless, a few years ago, social media platforms came under fire for enabling illegal sex trafficking online. To address this issue, in 2018, the U.S. House bill FOSTA (Fight Online Sex Trafficking Act) and the Senate bill SESTA (Stop Enabling Sex Traffickers Act) were signed into law. Both were considered a victory for sex trafficking victims (particularly minors) and their families. FOSTA-SESTA carved out an exception to Section 230 making website publishers responsible if third parties are found posting ads for prostitution on their platforms. This exception also had ripple effects for other companies; for example, it caused Craigslist to remove its Personals section.
In late June 2021, the Texas Supreme Court ruled that a lawsuit against Facebook for sex trafficking may proceed in state district court. The court upheld a Texas statute that makes a party liable “who intentionally or knowingly benefits from participating in a venture that traffics another person,” and found that Section 230 does not protect Facebook from liability for intentional participation in human trafficking. Bruce Reed, White House Deputy Chief of Staff, commented about it this way: “A social media platform like Facebook isn’t some Good Samaritan who stumbled onto a victim in distress: It created the scene that made the crime possible, developed the analytics to prevent or predict it, tracked both perpetrator and victim and made a handsome profit by targeting ads to all concerned, including the hordes who came by just to see the spectacle.” Reed goes on to advocate for the “polluter pays” principle used to mitigate environmental damage, suggesting that this principle can help achieve the same online.
Cyberbullying and Violent Crime Threats. Cyberbullying is another cruelty on social media. The inclination to bully, the presence of suitable targets, and the absence of capable guardianship create crime opportunities. Social media’s accessibility produces environmental conditions favorable to cyberbullying, which in turn promotes the act itself. Yet, unlike their specific crackdowns on content from certain world leaders that they say could cause offline violence, these same companies seem to have no problem allowing threats and cyberbullying by violent street gangs in America and other parts of the world. Both Facebook and Twitter have allowed posts associated with violent crime threats to go unchecked for years. YouTube has allowed the live streaming of youth murders by gangs, as well as gang recruitment videos, without repercussions. Since 2007, YouTube has rejected calls to report such videos to the police; instead, the company rakes in dollars from each view. Ironically, Information Systems and Criminology researchers could not collect research data from social media sites if the companies did a better job policing their content.
In early February 2021, the “SAFE TECH Act” was introduced by several U.S. senators to amend Section 230, allowing individuals to sue social media and other websites for posts that enable cyberstalking or that are abusive, discriminatory, or harassing. It ensures that Section 230 protection does not “apply to ads or other paid content” and allows “the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life.” Some disagree with the act, stating that it would have much the same effect as fully repealing the law, and that its language about payment may push web hosts to delete any controversial speech. Perhaps now is also the time for a more targeted bill, similar to FOSTA or SESTA, to mitigate gang-affiliated violence and threats on social media sites, potentially known as GANGSTA (the Gang-Affiliated Narcotics and Gun-Violence SToppage Act).
Killer Acquisitions (Acqs)
There is another dark side of Big Tech companies, resulting from their size and power. Big Tech companies have deep pockets and have grown by acquiring (“acqs”) rival companies and gobbling up innovative startups to take them out of the market. For example, if you use a Google product, chances are it originated from one of Google’s hundreds of acquisitions. These include Google Groups (from Usenet’s Deja News), Google Latitude (from Dodgeball), Google Voice (from GrandCentral), Google Sites (from JotSpot), GPS navigation software (from Waze), and YouTube Next Lab (from Next New Networks). Other acquisitions include Android, YouTube, DoubleClick, Looker, and Fitbit.
Facebook has acquired WhatsApp, Oculus VR, Instagram, CTRL-labs, and others. Ro Khanna, a Congressman representing part of Silicon Valley in California, has called for more aggressive antitrust enforcement and privacy regulation. He suggests that Facebook’s takeover of Instagram should never have been approved and that these should be separate platforms. Several federal government officials and attorneys general agree that Facebook should be broken up, as it engages in monopolistic behavior. In 2020, the Federal Trade Commission and 46 states sued Facebook on antitrust grounds, arguing that the company’s acquisitions of Instagram and WhatsApp turned it into a monopoly; however, a federal judge threw out the case in 2021, citing insufficient legal evidence. Twitter has gobbled up Magic Pony Technology (for machine-learning improvements), Periscope, and TellApart. Overall, acquisitions in the Big Tech industry have led to a decline in innovation and “created significant problems,” White House chief economic advisor Brian Deese said, including a lack of new market entrants and a possible decrease in wages.
Potential Solutions
There have been a few additional recent legal maneuvers to address the power of Big Tech that, if successful, could be part of the solution. These include ongoing lawsuits, new bills, recent executive orders, and Supreme Court statements.
- Lawsuits. In October 2020, the U.S. Justice Department filed a long-expected antitrust lawsuit against Google (a subsidiary of Alphabet), alleging that the company employs anticompetitive tactics. Additionally, former President Donald Trump filed class action lawsuits against Google, Facebook, and Twitter (and their CEOs) on July 7, 2021, for censorship based on political affiliation.
- Bills. In June 2021, David Cicilline introduced a bill that would “prohibit the tech behemoths from acquiring promising startups that could later become potential rivals and forbid them from using their platforms to discriminate against competitors. It would also prevent the companies from favoring their own products over competitors using their services.” In July 2021, John Kennedy introduced the “Don’t Push My Buttons Act,” legislation that would eliminate Section 230 protections for companies that use algorithms to increase engagement by dwelling excessively on issues that correspond to users’ emotions, preferences, habits, or beliefs. In late July, Reps. Cathy McMorris Rodgers and Jim Jordan drafted reform legislation to modify the Communications Decency Act that would require Big Tech companies to report any content moderation decisions to the Federal Trade Commission and to “implement and maintain reasonable and user-friendly appeals processes for decisions about content on the platforms.” Further, it would place them under a proposed “Section 230A,” which would hold the Big Tech giants accountable for content they “amplify, promote or suggest.”
- Executive Orders. On July 9, 2021, U.S. President Biden issued an executive order asking the FTC to create new rules on the data collection and user surveillance practices of tech companies. Yet, in a stunning reversal of stated policy, the White House Press Secretary admitted publicly in mid-July that the administration is currently colluding with one tech company, Facebook, to do just that: collect data and surveil users to remove ‘misinformation’.
- Supreme Court Justice Statements. Supreme Court Justice Clarence Thomas stated that, based on the First Amendment, future courts could uphold a statute treating social media apps as common carriers (places of public accommodation) and thus restrict them from removing content or accounts based on political point of view.
There are other innovative ideas, outside of legal action, posited to help address the dark side of technology. For example, on July 7, 2021, conservative members of the Judiciary Committee revealed a new framework for accountability, transparency, and antitrust strength to check Big Tech. Senator Mark Warner posits three ideas to increase competition in this industry:
- The ability for consumers to move easily from one social media platform to another, taking their data with them, while still being able to talk to others on the original platform (also known as data interoperability). The bipartisan 2019 Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act by Mark Warner and Josh Hawley addresses this issue;
- The ability for consumers to actually decline something instead of seeing only Yes or Learn More buttons; and
- The ability for consumers to know what their personal data is worth, since social media companies monetize personal data.
Conclusion
Are social media companies too big for our own good? Are innovative new entrants impeded in the Big Tech industry due to the sheer size of the tech giants? Are the practices of social media giants anti-competitive? These companies with killer apps cannot be allowed to continue their killer abuses and killer acquisitions without recourse.
The status quo is untenable, even as we grapple with the best approaches to mitigate the power of the social media platform “tres-opoly” of Facebook, Twitter, and Google. It is time to decrease the power of Big Tech. Both Democrats and Republicans agree that we need to either overhaul or remove Section 230 (albeit for different reasons), while possibly keeping protections for startups. Social media should be open for multiple views and discussion, not closed off to only one perspective.
Such moves are important for taming the Wild West of social media by insisting that social media companies become better, not bigger. Not only for us, but also for the generations to follow.
This post was originally published in Poole Thought Leadership.