
Overcoming the AI “Creep-Factor”

How We Can Ensure More People Accept AI by Making It Feel Less Intrusive

AI is incredibly useful, but it’s so complicated that it’s challenging to get people to adopt it, at least consciously. We may like that our phones can automatically tag our photos but often forget that it’s AI powering the tool. People are hesitant to trust a technology they don’t understand well, especially if it is, well, a bit creepy.

I have exposure to the most cutting-edge AI, and I think people should be really concerned by it. AI is a fundamental risk to the existence of human civilisation.

Elon Musk at the National Governors Association, 2017

AI can sometimes predict our own behaviour more accurately than the real people we know. We don’t like feeling that we can’t control how our data is collected and used, especially when AI uses that data to get us to take a particular action. It is well known that UX is an important factor to consider when creating a model, but a lot of data scientists forget a major part of a positive UX: minimising the creep-factor of AI.

“Our Phones Are Spying On Us” and Other Conspiracies Made Possible by AI

Thanks to advances in technology and the exponential growth of available data, AI has gotten much more powerful — and accurate. That’s exciting for those of us passionate about technology, but it’s also part of the creepiness problem. Algorithms today can predict much more about an individual from much less information (or at least from less knowingly provided information). Sometimes AI can seem to read our thoughts — or at least must be secretly listening to us. Right?

That our phones are secretly recording and spying on us for marketing purposes has been a top conspiracy theory on the internet for a few years — one that even some respected tech experts and journalists think is true. It’s been endlessly debated everywhere from the BBC to Esquire to Vox. There was even a Reply All podcast episode about your phone listening in on you as early as 2017. Nowadays, almost half of Americans and increasing numbers of Europeans are concerned that their phones are recording private conversations without their permission for ad targeting.

There’s no way to know with 100% certainty how much audio our phones capture, but that’s really beside the point: most experts have concluded that most apps don’t use recordings for ad targeting. Not because they can’t, but because there’s no need. AI tracking is so sophisticated that it’s already this accurate without the technological hassle and storage demands of audio collection. We inadvertently provide so much information about ourselves that we can be eerily well targeted regardless.

We tend to think about surveillance in the way that humans do. That if somebody learned about something that you talked about with a friend that meant that they were listening to you, but I think it’s harder for most people to make the connection between how much they give away in their online activities to these companies that can target advertisements to them.

David Choffnes, Northeastern University Professor and researcher on the topic of mobile privacy and security

Newer conspiracies swirl around the many virus-tracking apps released by governments around the world. People worry that downloading these apps will allow the government to track them forever, even though such tracking would be illegal. Many have compared these apps to the type of surveillance seen more commonly in China. Here in Italy, the Immuni app has failed to gain much traction due to some of these conspiracies, as well as very real concerns about privacy and security. More fears about creepy, always-watching AI are yet another consequence of Covid-19.

All-seeing, all-knowing AI may not be even close to reality, but that doesn’t mean we’re less wary of current AI capabilities. Forget Skynet; it’s Facebook’s data collection and tracking that we’re worried about. When the most visible examples of AI in our lives seem to be powered by spying on us, it’s no wonder so many people find AI so freaky.

Can AI Ever Be Used to Provide Personalised Recommendations without Creeping Us Out?

Personalisation is a good goal. Getting the right offer at the right time helps us as much as it helps the company making that offer. Even people who claim to feel concerned about AI’s spread often have no problem using it if the AI helps them solve a problem more efficiently.

In reality, most AI doesn’t creep us out. Most AI we interact with every day goes completely unnoticed. Sometimes that’s because the AI isn’t sophisticated enough to bother us: a customer service chatbot that can only respond to precise input is more irritating than menacing. Other AI fits so seamlessly into the processes we used before that all we notice is an improvement in service. When Apple created Face ID to unlock iPhones, most people were excited by the convenience, not worried about the implications of the AI.

Most often, however, AI goes unnoticed because we only see the results. We have no direct interaction with the AI itself; we only see how it makes our lives easier. For example, no customer will likely ever know that they can purchase their desired item in the right size because the Evo Replenish algorithm used predictive supply chain modelling to prevent a stockout of a newly popular shirt. Customers have no way of knowing how the store makes inventory decisions, and they probably wouldn’t care. The lack of empty shelves is convenient, and the AI that prevents the problem isn’t creepy because its predictions and recommendations aren’t made at the individual level. If no one is saying that I personally will buy a particular product, just that 15 people will that week, I am not concerned about my privacy.
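To make that distinction concrete, here is a minimal sketch of aggregate-level demand forecasting in Python. The data, field names and naive reorder rule are illustrative assumptions, not Evo’s actual Replenish algorithm; the point is simply that the unit of prediction is the SKU-week, never the person.

```python
import pandas as pd

# Hypothetical transaction log: what sold, where, and when.
# Note there are no customer identifiers anywhere in the data.
transactions = pd.DataFrame({
    "sku": ["SHIRT-42", "SHIRT-42", "SHIRT-42", "JEANS-07"],
    "store": ["Turin-01", "Turin-01", "Milan-03", "Turin-01"],
    "week": ["2020-W30", "2020-W31", "2020-W31", "2020-W31"],
})

# Aggregate to weekly demand per SKU and store: the model only ever
# sees counts like "15 units this week", never "this person will buy".
weekly_demand = (
    transactions.groupby(["sku", "store", "week"])
    .size()
    .rename("units_sold")
    .reset_index()
)

# A deliberately naive forecast: the trailing three-week average.
history = weekly_demand.query(
    "sku == 'SHIRT-42' and store == 'Turin-01'"
)["units_sold"]
forecast = history.tail(3).mean()

# Flag a stockout risk when expected demand exceeds stock on hand.
stock_on_hand = 1
print(f"Expected demand: {forecast:.1f}, reorder: {forecast > stock_on_hand}")
```

Nothing in this pipeline needs a name, an email address or a device ID, which is exactly why this kind of AI rarely registers as creepy.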

AI forecasts and personalisation unsettle us most when we can’t understand how the AI came to a conclusion about us personally, or when the AI is too believably human without being disclosed as AI. As such, merely embracing transparency and human-centric design will go a long way towards making AI less ominous.

Non-Creepy AI Is Transparent

The more transparent you are about how your AI works, the less creepy it seems. We tend to look sceptically at things we don’t understand. When you face something that you don’t understand but that seems to understand you, it’s not just suspicious; it’s creepy. Popular culture has shown us plenty of ways AI can potentially run amok. If we don’t understand how the technology works now, how could we possibly see disaster coming? Without transparency, AI feels like more of a threat, and that is ultimately what makes it so creepy.

Consider again the theory that Facebook is listening to us through our phones. People are so suspicious because Facebook refuses to disclose what information they collect on us and how they use that data to target ads.

[Facebook has] created this problem because they’re really good at collecting information about us. They won’t be very transparent about what they collect or how, and so, you’re basically forcing people to come up with the simplest possible solution for how Facebook knows stuff about them — and that’s that they’re listening in.

PJ Vogt, Tech Journalist and Co-host of Reply All

Transparency also means being clear about how you collect data and what you do with that information. It’s always going to seem creepy to gather tons of identifiable information on individuals if you can’t explain why you need it. That’s why we should be upfront about why particular information is requested and anonymise data whenever possible.

This data transparency doesn’t hurt AI models. At Evo, our algorithms still process data from over 1.2 billion people to make supply chain and pricing forecasts; we just scrape data in such a way that we entirely exclude any identifying information from our databases. The resulting AI recommendations are not affected, but the models become more transparent. It’s possible to be honest about what data you are using and how you sourced it without hurting the accuracy of outcomes, so long as you plan for transparency from day one.
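What this looks like in code will vary from pipeline to pipeline, but the underlying idea is simple enough to sketch. The record and field names below are hypothetical, not a description of Evo’s systems; the design point is that identifying fields are dropped at ingestion, before a record ever reaches the database:

```python
# Illustrative set of fields treated as directly identifying.
PII_FIELDS = {"customer_id", "name", "email", "phone", "address"}

def strip_identifiers(record: dict) -> dict:
    """Remove identifying fields before the record is stored,
    keeping only what the forecasting models actually need."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

raw = {
    "customer_id": 10482,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "basket": ["SHIRT-42"],
    "store": "Turin-01",
    "week": "2020-W31",
}

print(strip_identifiers(raw))
# {'basket': ['SHIRT-42'], 'store': 'Turin-01', 'week': '2020-W31'}
```

Because the stripping happens on the way in, there is nothing identifying to secure, disclose or explain away later. That is what planning for transparency from day one buys you.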

Human-Centric AI Design Helps Us Build Useful Models That Don’t Creep Out Users

While transparency is essential, a majority of people will still forgive AI for being a little creepy if it gets them results. Surveys show that almost 75% of us don’t mind intrusive AI if it helps avoid an issue before it develops, solves a problem quickly or minimises complexity. While it may be creepy to know that you’ve been tracked if all it leads to is more spam about a product you considered buying once, we worry less about tracking that warns us of a data breach. We are willing to make compromises when the AI solves a problem for us, not just for a company trying to recruit us as customers.

Take Google Maps, for example. Few people found it creepy when Maps started automatically suggesting directions for events planned in Google Calendar; it was obvious why Google had that information. People found it much creepier when Maps began making suggestions for directions to meetings or events directly from Gmail that never made it into Calendar. Still, this functionality was useful enough that few people complained. Only once Maps started predicting where you may need to go based on time of day and your habits did any real backlash begin. This recommendation felt more intrusive in part because it made the tracking more visible, but in part because it wasn’t always helpful. If you didn’t need those directions, you were unlikely to feel grateful Google knew where you habitually went.

That’s why it’s so important to solve the right problems with your AI. Human-centric design of AI prioritises the needs of the end-user and makes sure that the AI does not feel intrusive. If you create an algorithm that solves problems for everyone who interacts with the AI, you give yourself the best chance of minimising negative responses. At the very least, AI should respect people’s need for privacy and a sense of control. Put the people who will interact with the AI first and empathise with their perspectives. The resulting human-centric AI feels less unsettling.

Don’t Be a Creep

AI will continue to improve our lives greatly — but only if we can trust it. If we don’t minimise its creepiness, AI cannot reach its full potential. We will all lose out on the benefits, while doing less than we think to protect ourselves from the potential downsides.

AI will always be complex, but it doesn’t have to be a total mystery. We can be honest about how our AI works and where it gets its data without giving up any commercial secrets. When we put people first and operate transparently, AI becomes less creepy — and our lives get easier.

Special thanks to Kaitlin Goodrich for contributing to this article.


About the author

Giuseppe Craparotta is the most senior data scientist at Evo.

Previously, he interned at an aerospace company and worked in Product Lifecycle Management consultancy. He holds an MSc in Mathematical Engineering from the Polytechnic University of Turin, and he recently received a PhD in Pure and Applied Mathematics.

His research interests span applications of advanced statistics; in particular, he is focusing on sales forecasting for fashion. He loves hiking and singing!
