AI will understand the world (so you don't have to!)

Here's a false idea the conspiracy theorists missed: Different AI services are conspiring to dumb us down.

Artificial intelligence (AI) will someday exceed human intelligence, according to fearful futurist forecasters like Elon Musk. It will get there through iterative self-improvement, leading to a singularity in which the machines are smarter than people.

But there's a faster way. By making people dumber even as it makes itself smarter, AI could hasten the singularity. Students get smart by being studious; AI is wrecking that with algorithmic social feeds designed to addict and distract. Adults get smart by reading and writing; AI is wrecking that by doing the reading and writing for us, and by tricking us into believing dumb fake news. That's the secret plan of the evil machines.

OK, there is no "plan" or conspiracy or evil machines. Neither AI nor the humans who control AI are trying to make us dumber on purpose. But they are doing it by accident.

AI to make summary judgments

Facebook last month revealed that it is developing a tool that summarizes news articles so that users won't have to read them. The social network (and the world's largest source of news content) plans to use its "TLDR" feature to boil even the most fascinating and stimulating articles down into bland bullet lists.

Journalists are livid at the prospect. In a nutshell, Facebook plans to take content that news organizations pay for with advertising on their own sites and automatically repackage it as an easy-to-read bullet-point list that keeps readers on Facebook's ad-supported site. News organizations do all the work; Facebook collects all the money. That's the plan.

The threat to the media business is that Facebook wants to keep users from clicking through to the media sites where the content creators would monetize their work with advertising.

The threat to human culture is even more dangerous: Facebook is proposing to use machines to actually read the articles so that humans don't have to.

People already tend to skim headlines instead of reading articles; Twitter now even warns users before they retweet a link they haven't clicked. The public is growing increasingly confused about the difference between having skimmed a headline and having read and understood an article. Facebook's TLDR would serve mainly to convert the minority who still read articles into members of the majority who merely skim.

Facebook didn't invent this idea. AI has now evolved to the point where articles can be auto-summarized. The science-paper search site Semantic Scholar is using AI to summarize research papers. An AI-based tool called SummarizeBot can summarize any text you share with it via Facebook Messenger or Slack. If you request "news," it will send you summaries of the top news stories. Many other apps, services, sites and APIs that boil long stories down into short summaries are likely to emerge this year.
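
Under the hood, tools like these lean on off-the-shelf language models. As a rough illustration (this is a sketch, not Facebook's TLDR or SummarizeBot's actual code), here's what auto-summarization looks like using the open-source Hugging Face Transformers library and a stock pretrained model:

```python
# Minimal sketch: auto-summarizing an article with an off-the-shelf model.
# This is illustrative only; it is not the system Facebook, Semantic Scholar
# or SummarizeBot actually use.
from transformers import pipeline

# Load a general-purpose summarization model (model choice is illustrative).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Replace this placeholder with the full text of a real news article.
article = """
Paste the full text of a news article here. The model condenses it into a
few sentences, much like the TLDR-style features described above.
"""

# Boil the article down to a short blurb.
result = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```

A few lines of code, a pretrained model, and any article becomes a blurb -- which is exactly why summaries like these are about to be everywhere.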

Other ways AI will replace literacy

AI is sophisticated enough now to actually write. Machines can write articles, poetry and even books.

I've warned in this space about the downside of automated business writing tools, the most popular of which are Google's Smart Reply (which auto-writes email replies in Gmail) and Smart Compose (which auto-writes sentences as you type emails in Gmail and documents in Google Docs).

Many other tools, such as KeyWee, use AI to write marketing content.

Over time we can expect more of the content we encounter online to have been written by machines.

So what's wrong with using AI for reading and writing?

AI that summarizes or generates written content is a great convenience: it could save time and increase the number of ideas people are exposed to.

But following the trendlines into the future: What happens when AI does much of the writing AND the reading -- activities generally referred to as literacy?

And for that matter, what is literacy, anyway? What's it for?

Writing is organized thinking. It's a cultural tool that amplifies human intelligence and enables the sharing of knowledge. Once captured in words, written thoughts can be communicated to other humans or kept as prosthetic memory of prior thoughts.

Reading is the process of connecting with and sharing the thoughts of another human. Subtle word choices and phrases trigger something mysterious in the human mind that can result in profound understanding and connection.

It's reasonable to assume and fear that tools like Facebook's "TLDR" will bulldoze humor, irony, subtle word choice, analogy, cultural references, visual language and nuance in writing -- and, in the process, atrophy readers' "ear" for language and shatter the connection between writer and reader.

Facebook's AI will no doubt use machine learning to maximize time on the platform, not to improve human understanding. A summary that incentivizes users to click through to the article will be considered a failure by Facebook, whereas a summary that keeps them glued to their News Feed will be deemed a success.

This is yet another reason why Facebook's business objectives should not be the driving force in the evolution of human culture. A dumb user who knows nothing, who can't think and who never leaves Facebook to go read something is the perfect Facebook user.

Humans get better at reading by writing and writing by reading. AI literacy-replacement tools will interfere with this feedback loop, causing both to atrophy. And the more we lose our literacy, the more we'll rely on AI to do the thinking for us.

Use it or lose it

Crooked teeth are common. That's why it was a mystery for years why prehistoric skulls almost always had straight teeth.

It turns out that with the agricultural and industrial revolutions, our food has grown softer. As a result, our jaws have become smaller without the physical stress of eating rough food, and now our teeth no longer fit in our mouths. Our food is softer because we're using machines and chemicals to pre-masticate it before we eat it.

No big deal -- we can fix this with braces, retainers and tooth extractions. Crooked teeth can be straightened. But how will we fix our crooked brains wrecked by "soft," easier-to-digest content?

Automated AI-based tools that are offered as a convenience or productivity enhancement will reduce the need for people to read and write. If we don't use this capacity -- which is, after all, the capacity to think and connect -- then we will surely lose it.

When AI does the reading and the writing for us, we will, as a species, become (for lack of a better term) dumber, hastening the singularity in which AI is smarter than we are.

Worse than that -- it's unlikely that AI will actually have human-like intelligence. AI will be able to simulate intelligence, but arguably not actually have it (it certainly won't think the way we do). Nevertheless, these AI tools will encourage us to rely increasingly on AI, to our diminishment.

The replacement of literacy with AI represents a kind of final victory of numbers over words -- a victory of the number people over the word people.

That's why I would love to see more literacy advocates in AI. Instead of replacing reading and writing, we should be enhancing and encouraging it. Instead of inserting a machine mediator between humans trying to communicate with each other, we should be partnering with AI to educate and inform and bring people together. 

We should be trying to build a smarter user, not a dumber one.