Elgan on Tech

Is AI judging your personality?

Forget ‘surveillance capitalism.’ AI-based social media monitoring could cost you a job, college admission, rental property and more – and you’ll never know how it happened.

Airbnb wants to know if you have a "Machiavellian" personality before renting you a beach house.

The company may be using software to judge whether you're trustworthy enough to rent a house based on what you post on Facebook, Twitter and Instagram.

The company owns a patent on technology designed to rate the "personalities" of prospective guests by analyzing their social media activity, the goal being to decide whether they're risky guests who might damage a host's home.

The end product of the technology is a "trustworthiness score" assigned to every Airbnb guest. The score will reportedly be based not only on social media activity, but also on other data found online, including blog posts and legal records.

The technology was developed by Trooly, which Airbnb acquired three years ago. Trooly created an AI-based tool designed to "predict trustworthy relationships and interactions," and it uses social media as one data source.

The software builds the score from "personality traits" it perceives in the data, including some you might expect -- conscientiousness, openness, extraversion, agreeableness -- and some stranger ones, such as "narcissism" and "Machiavellianism." (Interestingly, the software also looks for involvement in civil litigation, which suggests the company may now, or in the future, ban people it predicts are more likely to sue.)
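To make concrete how fragile this kind of scoring can be, here's a minimal sketch of a trait-weighted trustworthiness score. Everything in it -- the trait list, the weights, the normalization, the default-to-zero behavior -- is my own assumption for illustration; it is not Airbnb's or Trooly's actual algorithm.

```python
# Hypothetical sketch -- NOT Airbnb's or Trooly's real method.
# Combines per-trait estimates (each assumed to be in [0, 1]) into
# one "trustworthiness score," weighting some traits up and the
# "dark triad" traits down.

TRAIT_WEIGHTS = {
    # Traits a system like this might reward (weights are invented)...
    "conscientiousness": 0.3,
    "openness": 0.15,
    "extraversion": 0.1,
    "agreeableness": 0.2,
    # ...and traits it might penalize.
    "narcissism": -0.35,
    "machiavellianism": -0.4,
}

def trustworthiness_score(trait_estimates: dict) -> float:
    """Collapse per-trait estimates into a single number in [0, 1].

    Note the quietly fragile assumption: a missing trait defaults to
    0.0, so absence of evidence counts as absence of the trait.
    """
    raw = sum(TRAIT_WEIGHTS[t] * trait_estimates.get(t, 0.0)
              for t in TRAIT_WEIGHTS)
    # Normalize against the best and worst possible raw scores.
    best = sum(w for w in TRAIT_WEIGHTS.values() if w > 0)
    worst = sum(w for w in TRAIT_WEIGHTS.values() if w < 0)
    return (raw - worst) / (best - worst)

# A conscientious, agreeable guest who also scores high on
# "Machiavellianism" lands mid-scale -- the dark-triad penalty
# cancels out the positive traits.
guest = {"conscientiousness": 0.9, "agreeableness": 0.8,
         "machiavellianism": 0.7}
print(round(trustworthiness_score(guest), 3))
```

Notice that a person with no social media presence at all gets exactly the midpoint score under this sketch: the system can't distinguish "no evidence" from "mediocre evidence," which is one of the design questions the rest of this column raises.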

Airbnb hasn't said whether it actually uses the software.

If you're surprised, shocked or unhappy about this news, then you're like most people who are unaware of the huge and rapidly growing practice of judging people -- customers, citizens, employees and students -- using AI applied to social media activity.

Airbnb isn't the only organization scanning social media to judge personality or predict behavior. Others include the Department of Homeland Security, the CIA, employers, school districts, police departments and insurance companies.

Some estimates say that up to half of all college admissions officers use AI-based social media monitoring tools as part of the applicant selection process.

HR departments and hiring managers are also increasingly using AI social monitoring before hiring.

U.S. government agencies, especially those that employ people who need security clearances, are also leaning on social media monitoring to check for untrustworthy employees.

And, as I reported in this space, the number of smartphones U.S. Customs and Border Protection searches when people enter the U.S. grows dramatically every year. These searches include social media accounts, which could later be monitored and analyzed using AI.

And not only are schools increasingly monitoring social media activity of students, some states are starting to require it by law.

AI-based social media monitoring is a bandwagon. And organizations of all kinds are jumping on it.

There's only one problem.

AI-based social media monitoring isn't that smart

Various organizations have been flirting with social media monitoring for years. But recent AI-based monitoring tools have sprouted up and created an industry — and an occupational specialty.

These tools look for personality traits like intelligence and social and financial responsibility, and for behaviors like obeying the law and acting responsibly.

The question isn’t whether AI applied to data harvesting works. It surely does. The question is whether social media activity reveals truths about its users. I'm questioning the quality of the data.

For example, scanning someone’s Instagram account may “reveal” that they’re fabulously wealthy and travel the world enjoying champagne and caviar. The truth may be that they’re broke, stressed-out wanna-be influencers who barter social exposure for hotel rooms and comped restaurant meals, where they take highly manipulated photos created purely for reputation building. Some people use social media to deliberately craft a false image of themselves.

A Twitter account may show a user as an upstanding, constructive and productive member of society, while a second, anonymous account unknown to the monitoring systems would reveal that person as a sociopathic troll who just wants to watch the world burn. People keep multiple social media accounts for different aspects of their personalities. And some of them are anonymous.

And a person’s Facebook account may be peppered with outrageous humor, full of profanity and exaggeration, which the monitoring tools may read as evidence of an untrustworthy personality -- when in fact the problem is that machines have no sense of humor or irony. The creators of the AI tools themselves may not have a real understanding of personality, either.

For example, the use of profanity online may reduce a person's trustworthiness score, based on the assumption that foul language indicates a lack of ethics or morality. But recent research suggests the opposite -- people with potty mouths may be, on average, more trustworthy, as well as smarter, more honest and more capable professionally. Do we trust Silicon Valley software companies to know or care about the subtleties and complexities of human personality?
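The profanity assumption above is easy to caricature in code. The following toy scorer -- entirely my own invention, not any vendor's real method -- penalizes surface features like swear words, which means a blunt, honest post scores worse than a smoothly worded scam:

```python
# Hypothetical caricature of a keyword-based "ethics" signal.
# NOT any real monitoring vendor's method -- it exists only to show
# why surface features misfire on tone, humor and irony.

PROFANITY = {"damn", "hell", "crap"}  # toy word list (an assumption)

def ethics_signal(post: str) -> int:
    """Return 0 or a negative penalty: one point off per profane word.

    A monitoring tool built on this assumption would subtract the
    result from a trust score.
    """
    words = {w.strip(".,!?").lower() for w in post.split()}
    return -len(words & PROFANITY)

honest = "Damn right I refunded the guest, it was the fair thing to do."
scam = "Limited offer, wire the deposit today and trust me completely."

print(ethics_signal(honest))  # penalized for one swear word
print(ethics_signal(scam))    # passes completely clean
```

The blunt-but-honest post gets docked; the scam passes untouched. That's the gap between what keyword features measure and what "trustworthiness" actually means.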

And finally, some people are obsessive non-stop users of many social media sites. Other people never use social media. Most fall somewhere in between.

There's a generational divide as well. Younger people are statistically less likely to post publicly, preferring private messaging and small-group social interaction. Is AI-based social media monitoring fundamentally ageist?

Women are more likely than men to post personal information on social media, whereas men are more likely than women to post impersonal information. Posting about personal matters may be more revealing of personality. Is AI-based social media monitoring fundamentally sexist?

Is anybody even asking these questions before jumping headlong into this hyper-consequential brand of surveillance?

Companies like Airbnb are trying to solve a real problem. Airbnb is essentially a match-making service where the “product” for one user is... another user. That makes this a question of quality assurance: how do you minimize the harm one user can do to another?

Here’s a caveat: For the past 40 years, the tech industry has always overhyped the magic pixie dust of the moment. Right now, that happens to be AI. What I fear is that companies like Airbnb that have a problem will conclude that the solution is to just let AI sorcery magically solve it. They’ll turn the systems loose on social media, run the algorithms and get results. The systems will tell them whom not to admit to the school, whom not to hire, whom to strip of a security clearance and whom to ban from Airbnb.

For the people on the other end of this process, there will be no transparency -- no knowledge whatsoever that it's happening -- and no appeals process.

Did the AI reject the right people? How will anyone know?

Did some of the people deemed “trustworthy” by the AI gain that distinction by gaming the system in some way? How will anyone know?

If you scan the internet for discussion of social media monitoring, you’ll find lots of advice to “watch what you post online.” It sounds reasonable, until you really think about what it implies. It's basically saying that if you’re somebody who really should be fired, not hired or rejected from a school based on your social media activity, you need to be smart and simulate the social media activity of a person who isn’t objectionable.

As knowledge about the scope of social media monitoring spreads, the practice of constraining oneself on social sites — playing to the AI audience and feeding it fake data so the machines judge you trustworthy — will become commonplace.

Let me express that more starkly. Many types of organizations -- from government agencies to enterprises to Silicon Valley tech companies of all stripes -- are jumping on the AI-based social media monitoring bandwagon. Dozens of companies are emerging to specialize in these tools. The practice is growing widespread.

And when the public wakes up to the reality of this widespread practice, the response will inevitably be to change social media behavior, to push the right buttons to maintain one's "trustworthiness score," to hack the system -- thus rendering the whole thing pointless and obsolete.

It's time to start caring about AI-based social media monitoring

Here’s something you can definitely intuit from scanning the social networks, even without AI: The knowledgeable, tech-loving public is generally wary and disdainful of “surveillance capitalism” practices -- personal-data harvesting, web-activity tracking and the very widespread practice of slurping down the contact databases of random users of various sites and apps, which uses you to gain access to the personal information of everyone you know, without their knowledge or permission.

Everybody seems to talk about it. Nobody seems to like it. But it’s also true that the actual material “harm” of this kind of everyday monitoring is hard to identify.

Meanwhile, you rarely hear online conversations about AI-based social media monitoring. Yet the potential “harms” are gigantic: losing your job, rejection from school, higher insurance rates and not being allowed to rent a beach house on Airbnb.

I'm not here with "tips" for the "untrustworthy" to game the system and trick the machines into trusting them. I'm here to tell you that the system can be gamed. And that AI-based social media monitoring to determine "trustworthiness" is itself... untrustworthy.