It's time to ban news-choosing algorithms

Rise of the machines? Turns out "Skynet" isn't controlling killer robots. It's controlling us.

I've got a controversial proposal for you. You might agree or disagree. But I'm sure we can all agree that disinformation on social media is a serious problem. It's harming health during the pandemic. It's threatening democracies around the world. And it's causing social division, spreading extremism, and creating mistrust in science, knowledge and expertise generally.

We're all grappling with online disinformation now -- governments, politicians, citizens, businesses, executives -- it's everybody's problem.

But not everybody understands the problem. In fact, hardly anybody does.

Why content algorithms exist

Every social network -- Facebook, Twitter, Instagram, Snapchat, Pinterest and the rest -- creates and refines software that picks what you see and in what order you see it.

The companies that run the social networks really, really want to be in control of what you see. But why?

Like all businesses, social networks are locked in a life-and-death contest for primacy. To succeed in this business is to survive. To fail is to fade away and die -- or be crushed or swallowed by the winners.

What's different is that, for social sites, "the competition" is: every human activity that requires attention. Work is the competition. Talking to your spouse is the competition. And watching TV, playing video games, and going out to dinner are the competition.

Algorithmic content choosing is the flip side of the privacy invasion coin. Social networks want to know everything about you and what your preferences are so they can choose content that grabs your attention and helps them win the attention contest. Also: knowing all about you helps them serve ads that you'll respond to.

Based on this deep knowledge of you, your preferences and your personality, content algorithms are designed to "push your buttons" -- to get you hooked on the site so you'll stay. And they work.

Social algorithms are evolving. Every day, they get better at capturing and holding the attention of users, and time spent on social media keeps rising. Between 2012 and 2022, the average amount of time people spend on social is expected to double -- and just keep on rising. (Today, people spend an average of 2.5 hours per day on social. This varies by country: in the Philippines and Nigeria, people spend nearly four hours a day on average on social networks.)

So social media algorithms get better every day. But better at what, exactly?

What they're really getting better at is making their product more addictive. Facebook's News Feed algorithms favor addictive or compulsive scrolling, clicking, commenting, sharing and following. They have no interest in presenting a coherent, accurate world view to users.

And that's the problem. News-choosing algorithms have a powerful influence on the human mind, and they exist to serve the business goals of the social network and nothing else.

What's wrong with news-choosing algorithms

Algorithms are software machines that, at scale, tell humans what to think -- or, at least, give us perspectives that shape our view of the world. Algorithms choose which dozen or two, out of the millions of news articles and blog posts published each day, you will see.

Very roughly, 0.001% of the articles published each day are presented to you; the rest are hidden from you.
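That selection process -- a handful of engagement-ranked posts surfaced out of millions -- can be sketched roughly like this. To be clear, this is an illustrative toy, not any platform's actual algorithm: the field names, scoring weights, and "outrage factor" are invented assumptions, and real ranking systems are secret and use thousands of signals.

```python
# Toy sketch of an engagement-ranking feed. All fields and weights are
# invented for illustration; real platform algorithms are far more complex.

def engagement_score(post, user_profile):
    """Score a post by predicted engagement for one user."""
    score = 0.0
    for topic in post["topics"]:
        # Posts matching the user's inferred interests rank higher.
        score += user_profile.get(topic, 0.0)
    # Emotionally charged content tends to drive clicks and shares,
    # so this toy model multiplies the score by an "outrage factor."
    score *= 1.0 + post["outrage_factor"]
    return score

def build_feed(posts, user_profile, feed_size=20):
    """Show the top few posts out of everything published; hide the rest."""
    ranked = sorted(posts, key=lambda p: engagement_score(p, user_profile),
                    reverse=True)
    return ranked[:feed_size]

posts = [
    {"id": 1, "topics": ["health"], "outrage_factor": 0.1},
    {"id": 2, "topics": ["health"], "outrage_factor": 0.9},
    {"id": 3, "topics": ["sports"], "outrage_factor": 0.0},
]
profile = {"health": 1.0, "sports": 0.2}
feed = build_feed(posts, profile, feed_size=2)
print([p["id"] for p in feed])  # [2, 1]: the outrage-boosted post ranks first
```

Note what falls out of even this crude model: accuracy never enters the score, and the most provocative post on a favored topic always wins the top slot.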

The four fundamental problems with content algorithms:

  1. Algorithms will continue to evolve to become more addictive. If they're addictive like sugar today, they'll be addictive like crack cocaine tomorrow.
  2. They're self-reinforcing. They find a minor interest and turn it into a major interest, eventually leading you to a tribe or community from which you may come to derive your identity as a person.
  3. They can be "gamed" or manipulated. Everyone from state propaganda operators to cult leaders to social media influencers can and will figure out how to trigger viral success for their posts. The information landscape favors algorithm hackers.
  4. They're secret and personalized, leading to a dangerous lack of transparency. The code that strongly shapes our knowledge of the world is hidden and changes in ways that are largely unknowable.
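The self-reinforcing loop in problem 2 can be shown with a toy simulation (the starting weights, the boost value, and the click behavior are invented assumptions, not measured values):

```python
# Toy simulation of a self-reinforcing interest loop: the feed surfaces the
# user's top-weighted topic, the user clicks it, and the click raises that
# topic's weight -- so a tiny initial edge snowballs into dominance.
# All numbers here are invented for illustration.

def run_loop(profile, rounds=10, boost=0.3):
    for _ in range(rounds):
        # The algorithm shows the topic it predicts the user likes most...
        shown = max(profile, key=profile.get)
        # ...the user clicks, and the algorithm reinforces that interest.
        profile[shown] += boost
    return profile

profile = {"gardening": 0.11, "news": 0.10}  # nearly tied at the start
run_loop(profile)
print(round(profile["gardening"], 2))  # 3.11 -- a minor interest now dominates
print(profile["news"])                 # 0.1 -- everything else starved out
```

A 0.01 difference in initial interest becomes a 30x gap after ten rounds, because the winner of each round is the only topic that ever gets reinforced.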

Why content algorithms love weaponized disinformation

To say that social media divides societies is something of a cliche. But the reason it divides is poorly understood. It's not about this fact or that fact. It's about "priming."

Algorithmically amplified disinformation not only presents social media users with disinformation, but "primes" them for believing falsehoods even without further disinformation. In other words, it's not the mere presentation of isolated information that's false. The damage is the construction, over time, of an understanding of the world that makes belief in factual information nearly impossible.

For example, when the Covid-19 pandemic emerged earlier this year, millions of social media users were already strongly predisposed to believe the coronavirus was a man-made or deliberately spread virus. In fact, many users were so "primed" with a distorted world view that accurate news about the virus would lead them to assume conspiracy without any additional falsehoods. Many believed instinctively that the coronavirus was created by Bill Gates to force a vaccine on the public; that it was cover for the damage of 5G "radiation," or was actually spread by 5G towers; that it was a Chinese plot to harm the West, a CIA plot to harm China, a Jewish plot to harm Muslims, a Muslim plot to harm Hindus, or a leftist plot to harm the president's approval ratings -- among many other theories.

A similar "priming" occurs around elections. Millions of voters in various countries are already so deep down their social media disinformation rabbit holes that, by the time the election-specific disinformation hits, they'll believe the most outlandish, evidence-free conspiracy theories and falsehoods.

In other words, algorithmically amplified disinformation doesn't merely expose the public to false information. It changes their world view and "primes" them to think about topics in a specific way. It doesn't just change what we know; it changes how we think.

Another example is conspiracy theories around coronavirus cures. They work because the public has been "primed" to assume that all official information is compromised or dangerous. So preconditioned victims of this process might reject mask wearing but embrace the taking of colloidal silver. Such an inclination is not based on the facts around masks or silver, but instead on a world view about the reliability of different information sources.

What to do about the "infodemic"

A report in BuzzFeed News claims that "disinformation and its fallout have defined 2020, the year of the infodemic" -- a problem so enormous that it "broke the US."

Sounds bad. And it is. Something has to be done. But what?

The default, knee-jerk solution is to play Whack-a-Mole with specific sources or topics, either by social network internal policy or legislation.

YouTube announced this week that, a month after the election is over, it intends to start removing new election-related conspiracy theories (old, previously posted conspiracy videos will remain). The timing appears to signal that YouTube didn't want to influence the election with its own policy; it wanted conspiracy theorists to influence the election instead.

Twitter says it flagged 300,000 tweets for election disinformation. Twitter's "solution" so far is to label disputed information in tweets about "civic integrity, COVID-19, and synthetic and manipulated media." There's no evidence that these labels have any effect. A similar warning shown when users retweet disputed content reduced quote tweets by 29%, which Twitter touts as progress.

Twitter also says vaguely that misleading tweets under these narrow categories are "de-amplified through our own recommendation systems."

The social network is taking special action against misleading tweets from "US political figures... accounts with more than 100,000 followers, or that obtain significant engagement." In addition to warnings, such tweets "won’t be algorithmically recommended by Twitter." Twitter went on to say that "We expect this will further reduce the visibility of misleading information."

That's a stark and rare admission about the damage caused by "algorithmically recommended" misinformation.

The problem is that these new rules are applied for the same reason as the algorithms themselves -- to benefit the social network. They're not applied based on the need to curb disinformation; they're applied to a narrow range of subjects and a narrow range of users to curb bad PR for Twitter.

In fact, content algorithms will always exist exclusively to benefit the companies that created them. Why would a company act against its own interests?

Another approach is through legislation. The European instinct is to sanction creators of online disinformation, including state actors. The European Commission said this week it would explore how to "impose costs on the perpetrators" of online disinformation, make social networks pledge to remove fake accounts, and require transparency in political advertising.

This will fail because it's too easy for state sponsors of disinformation to hide or misdirect the source. And asking social networks to take pledges won't really change their behavior. The EU proposal is far too weak to have any noticeable effect.

The best remedy: Don't let machines choose content

In the future, content-selecting algorithms will be self-learning artificial intelligence programs. Machines will literally decide what humans think.

Whom does this benefit? Why should we be forced to accept this? Why do we assume social networks have an inalienable right to use AI to serve up ever more addictive and distracting content?

We don't let casinos set up gaming tables in kindergartens. We don't allow drug dealers to set up heroin kiosks at colleges. We don't let tobacco companies send free monthly cigarettes to every home in America. The reason is that these addictive vices would do far more harm to the public if the businesses behind them had that kind of access.

So why do we allow social media companies to peddle their addictive products in a space where people spend hours each day -- especially now that we know how socially, politically, and medically catastrophic those products are?

Social networks don't have the "right" to use content-choosing algorithms. And we should not allow it.

Social networks should work like this: I follow people or businesses, and I get in my feed the content they post in reverse-chronological order. No monkey business. No inserting gateway-drug content designed to lead me down a rabbit hole of extremism or conspiracy theory.
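The proposed alternative is simple enough to express in a few lines. This is a minimal sketch; the author/timestamp fields are assumptions made for illustration.

```python
# Minimal sketch of the feed model proposed above: no ranking, no
# recommendations -- just posts from followed accounts, newest first.
# Field names (author, timestamp) are assumed for illustration.

def chronological_feed(posts, followed):
    """Return posts only from accounts the user follows, newest first."""
    visible = [p for p in posts if p["author"] in followed]
    return sorted(visible, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"author": "alice",    "timestamp": 100, "text": "morning"},
    {"author": "stranger", "timestamp": 300, "text": "viral bait"},
    {"author": "bob",      "timestamp": 200, "text": "lunch"},
]
feed = chronological_feed(posts, followed={"alice", "bob"})
print([p["author"] for p in feed])  # ['bob', 'alice'] -- no inserted content
```

There is nothing for the network to tune and nothing for a propagandist to game: the only inputs are whom you chose to follow and when they posted.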

It's time to stop using machines to decide what people know and how they think. Let's ban news-choosing algorithms.